Doctoral dissertations on the topic "Automaton inference"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the top 50 doctoral dissertations on the topic "Automaton inference".
Browse doctoral dissertations from a wide range of disciplines and compile accurate bibliographies.
Ansin, Rasmus, and Didrik Lundberg. "Automated Inference of Excitable Cell Models as Hybrid Automata". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154065.
Full text source
In this thesis we investigate, from an experimental point of view, the possibilities and limitations of the new learning algorithm HYCGE for hybrid automata. As an example of a practical application, we study the algorithm's ability to learn the action-potential behaviour of excitable cells, specifically the Hodgkin-Huxley model of a squid giant axon, the Luo-Rudy model of a guinea-pig ventricular cell, and Entcheva's model of a neonatal-rat ventricular cell. The validity and accuracy of the algorithm are also visualized graphically.
Rasoamanana, Aina Toky. "Derivation and Analysis of Cryptographic Protocol Implementation". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS005.
Full text source
TLS and SSH are two well-known and thoroughly studied security protocols. In this thesis, we focus on a specific class of vulnerabilities affecting implementations of both protocols: state machine errors. These vulnerabilities are caused by differences in interpreting the standard and correspond to deviations from the specifications, e.g. accepting invalid messages, or accepting valid messages out of sequence. We develop a generalized and systematic methodology to infer the protocol state machines of implementations such as the major TLS and SSH stacks from stimuli and observations, and to study their evolution across revisions. We use the L* algorithm to compute state machines corresponding to different execution scenarios. We reproduce several known vulnerabilities (denial of service, authentication bypasses), and uncover new ones. We also show that state machine inference is efficient and practical enough in many cases for integration within a continuous integration pipeline, to help find new vulnerabilities or deviations introduced during development. With our systematic black-box approach, we study over 600 different versions of server and client implementations in various scenarios (protocol versions, options). Using the resulting state machines, we propose a robust algorithm to fingerprint TLS and SSH stacks. To the best of our knowledge, this is the first application of this approach on such a broad perimeter, in terms of the number of TLS and SSH stacks, revisions, or execution scenarios studied.
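The class of bugs this abstract targets, valid messages accepted out of sequence, can be illustrated with a toy checker against a reference automaton. The states, message names, and transitions below are invented for illustration; they are not taken from any actual TLS or SSH stack, nor from the thesis's L*-based inference:

```python
# Toy reference handshake automaton (hypothetical states/messages) and a
# checker that flags messages arriving in a state where they are invalid.
REFERENCE = {
    ("start", "ClientHello"): "hello_sent",
    ("hello_sent", "ServerHello"): "keys_pending",
    ("keys_pending", "Finished"): "established",
}

def run(messages):
    """Replay a message sequence; return (final_state, deviations)."""
    state, deviations = "start", []
    for msg in messages:
        nxt = REFERENCE.get((state, msg))
        if nxt is None:
            deviations.append((state, msg))  # message not valid in this state
        else:
            state = nxt
    return state, deviations

# A conforming trace reaches "established" with no deviations;
# skipping ServerHello is flagged as an out-of-sequence message.
print(run(["ClientHello", "ServerHello", "Finished"]))
print(run(["ClientHello", "Finished"]))
```

A real inference setup would learn the transition table itself by querying a live implementation, as the L* algorithm does, rather than writing it by hand.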
Gransden, Thomas Glenn. "Automating proofs with state machine inference". Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/40814.
Full text source
Paige, Timothy Brooks. "Automatic inference for higher-order probabilistic programs". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:d912c4de-4b08-4729-aa19-766413735e2a.
Full text source
MERINO, JORGE SALVADOR PAREDES. "AUTOMATIC SYNTHESIS OF FUZZY INFERENCE SYSTEMS FOR CLASSIFICATION". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=27007@1.
Pełny tekst źródłaCOORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE EXCELENCIA ACADEMICA
Nowadays, much of the accumulated knowledge is stored as data. In many classification problems, the relationship between a set of variables (attributes) and a target variable of interest must be learned. Among the tools capable of modeling real systems, Fuzzy Inference Systems are considered excellent with respect to representing knowledge in a comprehensible way, as they are based on linguistic rules. This is relevant in applications where a black-box model does not suffice: such a model may attain good accuracy, but does not explain how its results are obtained. This dissertation presents the development of a Fuzzy Inference System in an automatic manner, where the rule base should favour linguistic interpretability and at the same time provide good accuracy. To this end, this work proposes the AutoFIS-Class model, an automatic method for generating Fuzzy Inference Systems for classification problems. Its main features are: (i) generation of premises that ensure minimum quality criteria, (ii) association of each rule premise with the most compatible consequent term; and (iii) aggregation of the rules of each class through operators that weigh the relevance of each rule. The proposed model was evaluated on 45 benchmark datasets and its results were compared to existing models based on Evolutionary Algorithms. Results show that the proposed Fuzzy Inference System is competitive, presenting good accuracy with a low number of rules.
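As a rough sketch of how a rule-based fuzzy classifier of this kind assigns a class, assuming toy triangular memberships and two hand-written rules (none of this is the AutoFIS-Class procedure itself):

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(x):
    """Fire each rule with its premise strength; aggregate per class by max."""
    low = tri(x, -1.0, 0.0, 5.0)   # membership of x in "low" (toy values)
    high = tri(x, 3.0, 8.0, 12.0)  # membership of x in "high" (toy values)
    # Rule 1: IF x is low THEN class A; Rule 2: IF x is high THEN class B
    scores = {"A": low, "B": high}
    return max(scores, key=scores.get), scores

print(classify(1.0))  # class "A" dominates for small x
print(classify(7.0))  # class "B" dominates for large x
```

Because each class score comes from named linguistic rules, the decision stays inspectable, which is the interpretability property the abstract emphasizes.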
Rainforth, Thomas William Gamlen. "Automating inference, learning, and design using probabilistic programming". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:e276f3b4-ff1d-44bf-9d67-013f68ce81f0.
Full text source
Dixon, Heidi. "Automating pseudo-Boolean inference within a DPLL framework". View abstract or download file of text, 2004. http://wwwlib.umi.com/cr/uoregon/fullcit?p3153782.
Full text source
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 140-146). Also available for download via the World Wide Web; free to University of Oregon users.
MacNish, Craig Gordon. "Nonmonotonic inference systems for modelling dynamic processes". Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240195.
Pełny tekst źródłaLin, Ye. "Internet data extraction based on automatic regular expression inference". [Ames, Iowa : Iowa State University], 2007.
Find full text source
El Kaliouby, Rana Ayman. "Mind-reading machines: automated inference of complex mental states". Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615030.
Full text source
Mugambi, Ernest Muthomi. "Automated inference of comprehensible models for medical data mining". Thesis, University of Sunderland, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425238.
Full text source
Serrano, Lucas. "Automatic inference of system software transformation rules from examples". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS425.
Full text source
The Linux kernel is present today in all kinds of computing environments, from smartphones to supercomputers, including both the latest hardware and "ancient" systems. This multiplicity of environments has come at the expense of a large code size, of approximately ten million lines of code, dedicated to device drivers. However, to add new functionalities, or for performance or security reasons, some internal Application Programming Interfaces (APIs) can be redesigned, triggering the need for changes to potentially thousands of drivers using them. This thesis proposes a novel approach, Spinfer, that can automatically perform these API usage updates. This new approach, based on pattern assembly constrained by control-flow relationships, can learn transformation rules even from imperfect examples. The learned rules are suitable for the challenges found in Linux kernel API usage updates.
Lipovetzky, Nir. "Structure and inference in classical planning". Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/101416.
Full text source
Classical planning problems consist of finding the sequence of actions that takes an agent from an initial state to its goal, assuming that the effects of the actions are deterministic. The most effective approach for finding such plans is heuristic search, automatically extracting from the problem representation heuristics that guide the search. In this thesis, we introduce alternative approaches for performing inference about the structure of planning problems, without appealing to heuristic functions or to reductions to SAT or CSP. We show that most standard benchmark problems can be solved with almost no search, or with a polynomially bounded amount of search, in some cases characterizing the structure of the problems in terms of a new complexity parameter for classical planning.
Voss, Chelsea (Chelsea S. ). "A tool for automated inference in rule-based biological models". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106447.
Full text source
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 45-46).
Rule-based biological models help researchers investigate systems such as cellular signalling pathways. Although these models are generally programmed by hand, some research efforts aim to program them automatically using biological facts extracted from papers via natural language processing. However, NLP facts cannot always be directly converted into mechanistic reaction rules for a rule-based model. Thus, there is a need for tools that can convert biological facts into mechanistic rules in a logically sound way. We construct such a tool specifically for Kappa, a model programming language, by implementing Iota, a logic language for Kappa models. Our tool can translate biological facts into Iota predicates, check predicates for satisfiability, and find models that satisfy predicates. We test our system against realistic use cases, and show that it can construct rule-based mechanistic models that are sound with respect to the semantics of the biological facts from which they were constructed.
by Chelsea Voss.
M. Eng.
Raghavendra, Archana. "(Semi) automatic wrapper generation for production systems by knowledge inference". [Gainesville, Fla.] : University of Florida, 2001. http://purl.fcla.edu/fcla/etd/UFE0000345.
Full text source
Title from title page of source document. Document formatted into pages; contains viii, 73 p.; also contains graphics. Includes vita. Includes bibliographical references.
Bhuiyan, Touhid. "Trust-based automated recommendation making". Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/49168/1/Touhid_Bhuiyan_Thesis.pdf.
Full text source
Rybalka, A. I., A. S. Kutsenko, and S. V. Kovalenko. "Modelling of an automated food quality assessment system based on fuzzy inference". Thesis, Харківський національний університет радіоелектроніки, 2020. http://openarchive.nure.ua/handle/document/14769.
Full text source
Marques, Henrique Costa. "An inference model with probabilistic ontologies to support automation in effects-based operations planning". Instituto Tecnológico de Aeronáutica, 2012. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=2190.
Full text source
Gennari, Rosella. "Mapping Inferences: Constraint Propagation and Diamond Satisfaction". Diss., Universiteit van Amsterdam, 2002. http://hdl.handle.net/10919/71553.
Full text source
Siegel, Holger [Verfasser]. "Numeric Inference of Heap Shapes for the Automated Analysis of Heap-Allocating Programs / Holger Siegel". München: Verlag Dr. Hut, 2016. http://d-nb.info/108438521X/34.
Full text source
Morettin, Paolo. "Learning and Reasoning in Hybrid Structured Spaces". Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/264203.
Full text source
Full text source
TEIXEIRA, TAIRO DOS PRAZERES. "A FUZZY INFERENCE SYSTEM WITH AUTOMATIC RULE EXTRACTION FOR GAS PATH DIAGNOSIS OF AVIATION GAS TURBINES". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28405@1.
Pełny tekst źródłaCOORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Gas turbines are complex and expensive equipment. In case of a failure, indirect losses are typically much larger than direct ones, since such equipment plays a critical role in the operation of industrial installations, aircraft, and heavy vehicles. Therefore, it is vital that gas turbines be provided with an efficient monitoring and diagnostic system. This is especially relevant in Brazil, where the turbine fleet has grown substantially in recent years, mainly due to the increasing number of thermal power plants and the growth of civil aviation. This work proposes a Fuzzy Inference System (FIS) with automatic rule extraction for gas path diagnosis. The proposed system makes use of a residual approach – gas path measurements are compared to a healthy engine reference – for preprocessing the raw input data that are forwarded to the detection and isolation modules. These operate in a hierarchical manner and are responsible for fault detection and isolation in components, sensors, and actuators. Since gas turbine failure data are difficult to access and expensive to obtain, the methodology is validated using a database of faults simulated by specialist software. The results show that the FIS is able to correctly detect and isolate failures and to provide linguistic interpretability, an important feature in the decision-making process regarding maintenance.
Cura, Rémi. "Inverse procedural Street Modelling : from interactive to automatic reconstruction". Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1034/document.
Full text source
World urban population is growing fast, and so are cities, inducing an urgent need for city planning and management. Increasing amounts of data are required as cities become larger and "smarter", and as more related applications need those data (planning, virtual tourism, traffic simulation, etc.). Data related to cities thus become larger and are integrated into more complex city models. Roads and streets are an essential part of the city, being the interface between public and private space, and between urban usages. Modelling streets (or street reconstruction) is difficult because streets can be very different from each other (in layout, functions, morphology) and contain widely varying urban features (furniture, markings, traffic signs), at different scales. In this thesis, we propose an automatic and semi-automatic framework to model and reconstruct streets using the inverse procedural modelling paradigm. The main guiding principle is to generate a generic procedural model and then adapt it to reality using observations. In our framework, a "best guess" road model is first generated from very little information (road axis network and associated attributes), which is available in most national databases. This road model is then fitted to observations by combining in-base interactive user edition (using common GIS software as the graphical interface) with semi-automated optimisation. The optimisation approach adapts the road model so that it fits observations of urban features extracted from diverse sensing data. Both street generation (StreetGen) and interactions happen in a database server, as does the management of large amounts of street Lidar data (sensing data) used as observations, through a Point Cloud Server. We test our methods on the entire city of Paris, whose streets are generated in a few minutes and can be edited interactively (<0.3 s) by several concurrent users. Automatic fitting (to within a few metres) shows promising results (average distance to ground truth reduced from 2.0 m to 0.5 m). In the future, this method could be combined with others dedicated to the reconstruction of buildings, vegetation, etc., so that an affordable, precise, and up-to-date city model can be obtained quickly and semi-automatically. This would also allow such models to be used in other application areas. Indeed, the possibility of having common, more generic city models is an important challenge given the cost and complexity of their construction.
El, Maadani Khalid. "Identification de systèmes séquentiels structurés : Application à la validation du test". Toulouse, INSA, 1993. http://www.theses.fr/1993ISAT0003.
Full text source
Pernestål, Anna. "A Bayesian approach to fault isolation with application to diesel engine diagnosis". Licentiate thesis, KTH, School of Electrical Engineering (EES), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4294.
Full text source
Users of heavy trucks, as well as legislation, put increasing demands on these vehicles. The vehicles should be more comfortable, reliable and safe. Furthermore, they should consume less fuel and be more environmentally friendly. For example, this means that faults that cause the emissions to increase must be detected early. To meet these requirements on comfort and performance, advanced sensor-based computer control systems are used. However, the increased complexity makes the vehicles more difficult for the workshop mechanic to maintain and repair. A diagnosis system that detects and localizes faults is thus needed, both as an aid in the repair process and for detecting and isolating (localizing) faults on board, to guarantee that safety and environmental goals are satisfied.
Reliable fault isolation is often a challenging task. Noise, disturbances and model errors can cause problems. Also, two different faults may lead to the same observed behavior of the system under diagnosis. This means that there are several faults that could possibly explain the observed behavior of the vehicle.
In this thesis, a Bayesian approach to fault isolation is proposed. The idea is to compute the probabilities, given "all information at hand", that certain faults are present in the system under diagnosis. By "all information at hand" we mean qualitative and quantitative information about how probable different faults are, and possibly also data collected during test drives with the vehicle when faults are present. The information may also include knowledge about which observed behavior is to be expected when certain faults are present.
The advantage of the Bayesian approach is the possibility to combine information of different characteristics, and also to facilitate isolation of previously unknown faults as well as faults from which only vague information is available. Furthermore, Bayesian probability theory combined with decision theory provide methods for determining the best action to perform to reduce the effects from faults.
Using the Bayesian approach to fault isolation to diagnose large and complex systems may lead to computational and complexity problems. In this thesis, these problems are solved in three different ways. First, equivalence classes are introduced for different faults with equal probability distributions. Second, by using the structure of the computations, efficient storage methods can be used. Finally, if the previous two simplifications are not sufficient, it is shown how the problem can be approximated by partitioning it into a set of sub problems, which each can be efficiently solved using the presented methods.
The Bayesian approach to fault isolation is applied to the diagnosis of the gas flow of an automotive diesel engine. Data collected from real driving situations with implemented faults, is used in the evaluation of the methods. Furthermore, the influences of important design parameters are investigated.
The experiments show that the proposed Bayesian approach has promising potentials for vehicle diagnosis, and performs well on this real problem. Compared with more classical methods, e.g. structured residuals, the Bayesian approach used here gives higher probability of detection and isolation of the true underlying fault.
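The core computation the abstract describes, a posterior over candidate faults given observations, reduces to Bayes' rule over a discrete set of hypotheses. The priors and likelihoods below are invented toy numbers, not values from the thesis or any engine model:

```python
# Hypothetical candidate faults with prior probabilities (toy values).
PRIORS = {"no_fault": 0.80, "sensor_fault": 0.15, "leak": 0.05}
# P(observing a "high residual" | fault), also invented for illustration.
LIKELIHOOD = {"no_fault": 0.05, "sensor_fault": 0.60, "leak": 0.70}

def posterior(likelihood, priors):
    """Bayes' rule: normalize likelihood * prior over all fault hypotheses."""
    unnorm = {f: likelihood[f] * p for f, p in priors.items()}
    z = sum(unnorm.values())  # normalizing constant P(observation)
    return {f: v / z for f, v in unnorm.items()}

post = posterior(LIKELIHOOD, PRIORS)
best = max(post, key=post.get)  # most probable explanation of the residual
print(best)  # → sensor_fault
```

The equivalence classes mentioned in the abstract would merge fault hypotheses with identical likelihood functions before this computation, shrinking the hypothesis set.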
Surovič, Marek. "Statická detekce malware nad LLVM IR". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255427.
Full text source
Ahnlén, Fredrik. "Automatic Detection of Low Passability Terrain Features in the Scandinavian Mountains". Thesis, KTH, Geodesi och satellitpositionering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254709.
Pełny tekst źródłaDe senaste åren har mycket fokus lagts på att ersätta tidskrävande manuella karterings- och klassificeringsmetodermed automatiserade lösningar med minimal mänsklig inverkan. Det är numeramöjligt att digitalt klassificera marktäcket och terrängföremål över stora områden, snabbt och medhög noggrannhet. Detta med hjälp av enbart fjärranalys, vilket medför en betydligt mer hållbarprocess och slutprodukt. Trots det finns det fortfarande terrängföremål som inte har en etableradmetod för noggrann automatisk kartering.Den skandinaviska fjällkedjan består till stor del av svårpasserade terrängföremål som sankmarker,videsnår och stenig mark. Alla som tar sig fram i terrängen obanat skulle ha nytta av attkunna undvika dessa områden men de är i nuläget inte karterade med önskvärd noggrannhet.Målet med denna analys var att utforma en metod för att klassificera och kartera dessa terrängföremåli Skanderna, med hög noggrannhet och minimal mänsklig inverkan med hjälp avfjärranalys. Valet av testområde för analysen är en större dal och bergssida sydost om Abisko inorra Sverige som innehåller tydliga exemplar av alla berörda terrängföremål. Metoden baseradespå att träna en Fuzzy Logic classifier med manuellt utvald träningsdata och deskriptorer,valda för att bäst separera klasserna utifrån deras karaktärsdrag. Inledningsvis valdes en uppsättningav kandidatdeskriptorer som sedan filtrerades till den slutgiltiga uppsättningen med hjälp avett Fisher score filter. Ett Fuzzy Inference System byggdes och tränades med träningsdata fråndeskriptorerna vilket slutligen användes för att klassificera hela testområdet pixelvis. Det klassificeraderesultatet klustrades därefter med hjälp av ett majoritetsfilter. 
Resultatet validerades genomvisuell inspektion, jämförelse med befintliga kartprodukter och genom confusion matriser, vilkaberäknades både för träningsdata och valideringsdata samt för det klustrade och icke-klustraderesultatet.Resultatet visade att de svårpasserade terrängföremålen sankmark, videsnår och stenig markkan karteras med hög noggrannhet med hjälp denna metod och att resultaten generellt är tydligtbättre än nuvarande kartprodukter. Däremot kan metoden finjusteras på flera plan för att optimeras.Bland annat genom att implementera deskriptorer för markvattenrörelser och användandeav LiDAR med högre spatial upplösning, samt med ett mer fulltäckande och spritt val av klasser.
Sun, Wenzhe. "Bus Bunching Prediction and Transit Route Demand Estimation Using Automatic Vehicle Location Data". Kyoto University, 2020. http://hdl.handle.net/2433/253498.
Full text source
Bossert, Georges. "Exploiting Semantic for the Automatic Reverse Engineering of Communication Protocols". Thesis, Supélec, 2014. http://www.theses.fr/2014SUPL0027/document.
Full text source
This thesis presents a practical approach to the automatic reverse engineering of undocumented communication protocols. Current work in the field of automated protocol reverse engineering either infers incomplete protocol specifications or requires too much stimulation of the targeted implementation, with the risk of being defeated by counter-inference techniques. We propose to tackle these issues by leveraging the semantics of the protocol to improve the quality, speed and stealthiness of the inference process. This work covers the two main aspects of protocol reverse engineering: the inference of the protocol's syntactic definition and of its grammatical definition. We propose an open-source tool, called Netzob, that implements our work to help security experts in their work against the latest cyber-threats. We claim Netzob is the most advanced published tool tackling the reverse engineering and simulation of undocumented protocols.
Zhao, Jinhua 1977. "The planning and analysis implications of automated data collection systems : rail transit OD matrix inference and path choice modeling examples". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28752.
Full text source
Includes bibliographical references (leaf 124).
Transit agencies in the U.S. are on the brink of a major change in the way they make many critical planning decisions. Until recently, transit agencies have lacked the data and the analysis techniques needed to make informed decisions in both long-term planning and day-to-day operations. Now these agencies are entering an era in which a large volume of raw data will be available due to the implementation of ITS technology, including Automated Data Collection (ADC) systems such as Automated Fare Collection (AFC) systems, Automated Vehicle Location (AVL) systems, and Automatic Passenger Counting (APC) systems. Automated Data Collection systems have distinct advantages over traditional data collection methods: large temporal and spatial coverage, continuous data flow and currency, low marginal cost, accuracy, automatic collection and central storage, etc. Thanks to these unique features, there exists a great potential for ADC systems to be used to support decision-making in transit agencies. However, effectively utilizing ADC system data is not straightforward. Several examples are given to illustrate that there is a critical gap between what ADC systems directly offer and what is practically needed in public transit agencies' decision-making. Meanwhile, the framework of data processing and analysis is not readily available, and transit agencies generally lack the needed qualified staff. As a consequence, these data sources have not yet been effectively utilized in practice. A strong foundation of ADC data manipulation, analysis methodologies and techniques, with the support of advanced technologies such as DBMS and GIS, is required before the full value of the new data source can be exploited. This research is an initial attempt to lay out such a framework by presenting two case studies, both in the context of the Chicago Transit Authority. One study proposes an enhanced method of inferring the rail trip OD matrix from an origin-only AFC system to replace the routine passenger survey. The proposed algorithm takes advantage of the pattern of a person's consecutive transit trip segments. In particular, the study examines the rail-to-bus case (which is ignored by prior studies) by integrating AFC and AVL data and utilizing GIS and DBMS technologies. A software tool is developed to facilitate the implementation of the algorithm. The other study is of rail path choice, which employs the Logit and Mixed Logit models to examine revealed public transit riders' travel behavior based on the inferred OD matrix and the transit network attributes. This study is based on two data sources: the rail trip OD matrix inferred in the first case study and the attributes of alternative paths calculated from a network representation in TransCAD. This study demonstrates that a rigorous traveler behavior analysis can be performed based on data from ADC systems. Both cases illustrate the potential as well as the difficulty of utilizing these systems and, more importantly, demonstrate that at relatively low marginal cost, ADC systems can provide transit agencies with a rich information source to support decision making. The impact of a new data collection strategy ...
by Jinhua Zhao.
S.M. in Transportation
M.C.P.
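The trip-chaining heuristic summarized in the abstract above can be sketched in a few lines: in an origin-only AFC system, each trip's destination is assumed to be the origin of the rider's next boarding, and the last trip of the day is assumed to close the chain back to the first origin. The station names and the `infer_od_pairs` helper below are illustrative assumptions, not code from the thesis.

```python
# Minimal sketch of origin-only OD inference by trip chaining (assumed
# helper, toy data; real pipelines add spatial/temporal feasibility checks).

def infer_od_pairs(boardings):
    """boardings: one rider's chronologically ordered boarding stations."""
    if len(boardings) < 2:
        return []  # a single tap gives no destination evidence
    pairs = []
    for i, origin in enumerate(boardings):
        # destination = next boarding's origin (wrap around for the last trip)
        destination = boardings[(i + 1) % len(boardings)]
        pairs.append((origin, destination))
    return pairs

taps = ["Wabash", "Midway", "Wabash"]  # one rider's taps for a day
pairs = infer_od_pairs(taps)
print(pairs)  # [('Wabash', 'Midway'), ('Midway', 'Wabash'), ('Wabash', 'Wabash')]
```

Real implementations discard degenerate pairs (such as the final Wabash-to-Wabash wrap) and chains whose consecutive taps are implausibly far apart; the rail-to-bus case studied in the thesis additionally needs AVL data to locate bus boardings.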
Aho, P. (Pekka). "Automated state model extraction, testing and change detection through graphical user interface". Doctoral thesis, Oulun yliopisto, 2019. http://urn.fi/urn:isbn:9789526224060.
Abstract: Testing is an important part of quality assurance. Agile development processes and continuous integration increase the need to automate all areas of testing. Testing through a graphical user interface (GUI) is usually automated as scripts that are created either by recording manual testing or by writing them with a script editor; the scripts then automate the execution of test cases. Changes in the GUI require updating the scripts, and the effort of maintaining them is a major problem. Model-based testing automates not only test execution but also test-case design. Traditionally, the models are designed manually with a modeling tool, abstract test cases are generated from the model automatically with a model-based testing tool, and an adapter is then implemented to turn the abstract test cases into concrete ones that can be executed against the system under test. When the GUI under test changes, only the model needs to be updated and the test cases can be regenerated automatically, reducing the maintenance effort. Designing the models and implementing the adapters, however, requires considerable effort and specialized expertise. This dissertation investigates 1) whether state models can be extracted automatically from systems with a graphical user interface, and 2) whether the automatically extracted state models can be used to automate testing. The research focuses on desktop applications and on the use of dynamic analysis through the GUI during automated exploration of the system. The results show that automatically extracting state models through the GUI is possible and that the models can be used to generate test cases for regression testing.
A more promising approach, however, is to compare models extracted from consecutive versions of the system and to detect the changes between versions automatically.
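The change-detection idea in the last sentence amounts to diffing two extracted models. The (state, action, next-state) triple representation and the `diff_models` helper below are assumptions for illustration, not the dissertation's tooling.

```python
# Sketch: each extracted GUI state model is a set of
# (state, action, next_state) transitions; the diff reports what a new
# version of the application added or removed relative to the old one.

def diff_models(old, new):
    return {"added": sorted(new - old), "removed": sorted(old - new)}

v1 = {("main", "click:Settings", "settings"),
      ("settings", "click:Back", "main")}
v2 = {("main", "click:Settings", "settings"),
      ("settings", "click:Back", "main"),
      ("main", "click:Help", "help")}      # a new dialog in version 2

print(diff_models(v1, v2))
# {'added': [('main', 'click:Help', 'help')], 'removed': []}
```

Flagged transitions are then triaged by a human: an intended change updates the reference model, an unintended one is a regression.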
Durand, William. "Automated test generation for production systems with a model-based testing approach". Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22691/document.
This thesis tackles the problem of testing (legacy) production systems, such as those of our industrial partner Michelin, one of the three largest tire manufacturers in the world, by means of Model-based Testing. A production system is defined as a set of production machines controlled by software in a factory. Despite the large body of work within the field of Model-based Testing, a common issue remains the writing of models describing either the system under test or its specification. It is a tedious task that should be performed regularly in order to keep the models up to date (which is often also true of any documentation in industry). A second point to take into account is that production systems often run continuously and should not be disrupted, which limits the use of most existing classical testing techniques. To address the first issue, we present an approach to infer exact models from traces, i.e. sequences of events observed in a production environment. We leverage the data exchanged among the devices and software, in a black-box perspective, to construct behavioral models using different techniques such as expert systems, model inference, and machine learning. This results in large, yet partial, models gathering the behaviors recorded from a system under analysis. We introduce a context-specific algorithm to reduce such models in order to make them more usable, while preserving trace equivalence between the original inferred models and the reduced ones. These models can serve different purposes, e.g. generating documentation and data mining, but also testing. To address the problem of testing production systems without disturbing them, this thesis introduces an offline passive Model-based Testing technique that can detect differences between two production systems.
This technique leverages the inferred models and relies on two implementation relations: a slightly modified version of the existing trace preorder relation, and a weaker relation proposed to overcome the partial nature of the inferred models. Overall, the thesis presents Autofunk, a modular framework for model inference and testing of production systems, gathering the previous notions. Its Java implementation has been applied to different applications and production systems at Michelin, and this thesis gives results from several case studies. The prototype developed during this thesis should become a standard tool at Michelin.
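The offline passive idea above can be illustrated with a toy check: infer a prefix-tree model from reference traces, then replay traces observed on another system against it and report the first unexplained event. This is a drastic simplification of the trace-preorder relation the thesis uses, not Autofunk itself; all names and traces are invented.

```python
# Sketch of passive conformance checking against an inferred model.

def build_model(traces):
    """Infer a prefix-tree automaton from event traces; state 0 is initial."""
    delta, next_state = {}, 1
    for trace in traces:
        state = 0
        for event in trace:
            if (state, event) not in delta:
                delta[(state, event)] = next_state
                next_state += 1
            state = delta[(state, event)]
    return delta

def deviations(delta, trace):
    """Replay a trace; return (index, event) of the first unexplained step."""
    state = 0
    for i, event in enumerate(trace):
        if (state, event) not in delta:
            return (i, event)
        state = delta[(state, event)]
    return None  # the trace conforms to the model

reference = [["load", "process", "unload"], ["load", "unload"]]
model = build_model(reference)
print(deviations(model, ["load", "process", "unload"]))  # None
print(deviations(model, ["load", "reset"]))              # (1, 'reset')
```

Because the inferred model is partial, a deviation is only a candidate difference; the weaker implementation relation mentioned above exists precisely to avoid condemning behavior the model simply never observed.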
Gordon, Jason B. (Jason Benjamin). "Intermodal passenger flows on London's public transport network : automated inference of full passenger journeys using fare-transaction and vehicle-location data". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78242.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 147-155).
Urban public transport providers have historically planned and managed their networks and services with limited knowledge of their customers' travel patterns. While ticket gates and bus fareboxes yield counts of passenger activity in specific stations and vehicles, the relationships between these transactions (the origins, interchanges, and destinations of individual passengers) have typically been acquired only through costly, and therefore small and infrequent, rider surveys. Building upon recent work on the utilization of automated fare-collection and vehicle-location systems for passenger-behavior analysis, this thesis presents methods for inferring the full journeys of all riders on a large public transport network. Using complete daily sets of data from London's Oyster farecard and iBus vehicle-location system, boarding and alighting times and locations are inferred for individual bus passengers, interchanges are inferred between passenger trips of various public modes, and full-journey origin-interchange-destination matrices are constructed, which include the estimated flows of non-farecard passengers. The outputs are validated against surveys and traditional origin-destination matrices, and the software implementation demonstrates that the procedure is efficient enough to be performed daily, enabling transport providers to observe travel behavior on all services at all times.
by Jason B. Gordon.
S.M. in Transportation
M.C.P.
Kazakov, Mikhaïl. "A Methodology of semi-automated software integration : an approach based on logical inference. Application to numerical simulation solutions of Open CASCADE". INSA de Rouen, 2004. http://www.theses.fr/2004ISAM0001.
Zheng, Ning. "Discovering interpretable topics in free-style text: diagnostics, rare topics, and topic supervision". Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199237529.
Lopes, Victor Dias. "Proposta de integração entre tecnologias adaptativas e algoritmos genéticos". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-01072009-133614/.
This work is an initial study of the integration of two computing engineering areas, adaptive technologies and genetic algorithms. To that end, genetic algorithms were applied to the inference of adaptive automata. Several techniques were studied and proposed during the implementation of the algorithm, always seeking more satisfactory results. Both technologies, genetic algorithms and adaptive technology, hold very strong adaptive features, yet with very different characteristics in the way they are implemented and executed. The inferences proposed in this work were performed successfully, so that the techniques described may be employed in aid tools for designers of such devices; such tools may be useful due to the complexity involved in the development of an adaptive automaton. Through this application of genetic algorithms, observing how the automata evolved during the algorithm's execution, we believe a better understanding was obtained of the structure of adaptive automata and of how these two important technologies can be integrated.
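As a hedged illustration of the approach, the sketch below evolves a plain two-state DFA over {0, 1} (rather than an adaptive automaton) toward consistency with labeled sample strings. The genome encoding, fitness function, sample language (even number of 1s) and all parameters are invented for the example.

```python
import random

# Toy genetic algorithm searching for a finite automaton consistent with
# labeled samples. A genome packs a 2-state DFA: 4 transition entries
# followed by 2 accept flags, all in {0, 1}.

random.seed(0)
SAMPLES = [("", True), ("1", False), ("0", True), ("11", True),
           ("10", False), ("011", True), ("111", False)]

def run(genome, word):
    state = 0
    for ch in word:
        state = genome[2 * state + int(ch)]  # next state for (state, symbol)
    return bool(genome[4 + state])           # accept flag of the final state

def fitness(genome):
    return sum(run(genome, w) == label for w, label in SAMPLES)

def mutate(genome):
    g = list(genome)
    g[random.randrange(6)] = random.randrange(2)  # flip one gene
    return tuple(g)

population = [tuple(random.randrange(2) for _ in range(6)) for _ in range(30)]
for _ in range(200):  # elitist loop: keep the 10 fittest, mutate them
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(SAMPLES):
        break
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(20)]
best = max(population, key=fitness)
print("best fitness:", fitness(best), "/", len(SAMPLES))
```

The even-parity DFA, genome (0, 1, 1, 0, 1, 0), scores perfectly on these samples, and the search space here is tiny (64 genomes), so the loop converges almost immediately; the interest of the thesis's setting is that adaptive automata make both the genome encoding and the search space far richer.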
Galvanin, Edinéia Aparecida dos Santos. "Extração automática de contornos de telhados de edifícios em um modelo digital de elevação, utilizando inferência Bayesiana e campos aleatórios de Markov /". Presidente Prudente : [s.n.], 2007. http://hdl.handle.net/11449/100258.
Banca: Nilton Nobuhiro Imai
Banca: Maurício Galo
Banca: Edson Aparecido Mitishita
Resumo: As metodologias para a extração automática de telhados desempenham um papel importante no contexto de aquisição de informação espacial para Sistemas de Informação Geográficas (SIG). Neste sentido, este trabalho propõe uma metodologia para extração automática de contornos de telhado de edifícios utilizando dados de varredura a laser. A metodologia baseia-se em duas etapas principais: 1) Extração de regiões altas (edifícios, árvores etc.) de um Modelo Digital de Elevação (MDE) gerado a partir dos dados laser; 2) Extração das regiões altas que correspondem a contornos de telhados. Na primeira etapa são utilizadas as técnicas de divisão recursiva, via estrutura quadtree e de fusão Bayesiana de regiões considerando Markov Random Field (MRF). Inicialmente a técnica de divisão recursiva é usada para particionar o MDE em regiões homogêneas. No entanto, devido a ligeiras diferenças de altura no MDE, nesta etapa a fragmentação das regiões pode ser relativamente alta. Para minimizar essa fragmentação, a técnica de fusão Bayesiana de regiões é aplicada nos dados segmentados. Utiliza-se para tanto um modelo hierárquico, cujas alturas médias das regiões dependem de uma média geral e de um efeito aleatório, que incorpora a relação de vizinhança entre elas. A distribuição a priori para o efeito aleatório é especificada como um modelo condicional auto-regressivo (CAR). As distribuições a posteriori para os parâmetros de interesse foram obtidas utilizando o Amostrador de Gibbs. Na segunda etapa os contornos de telhados são identificados entre todos os objetos altos extraídos na etapa anterior. Levando em conta algumas propriedades de telhados e as medidas de alguns atributos (por exemplo, área, retangularidade, ângulos entre eixos principais de objetos) é construída uma função de energia a partir do modelo MRF.
Abstract: Methodologies for automatic building roof extraction are important in the context of spatial information acquisition for geographical information systems (GIS). Thus, this work proposes a methodology for automatic extraction of building roof contour from laser scanning data. The methodology is based on two stages: 1) Extraction of high regions (buildings, trees etc.) from a Digital Elevation Model (DEM) derived from laser scanning data; 2) Building roof contour extraction. In the first stage is applied the recursive splitting technique using the quadtree structure followed by a Bayesian merging technique considering Markov Random Field (MRF) model. The recursive splitting technique subdivides the DEM into homogeneous regions. However, due to slight height differences in the DEM, in this stage the region fragmentation can be relatively high. In order to minimize the fragmentation, a region merging technique based on the Bayesian framework is applied to the previously segmented data. Thus, a hierarchical model is proposed, whose height values in the data depend on a general mean plus a random effect. The prior distribution for the random effects is specified by the Conditional Autoregressive (CAR) model. The posterior probability distributions are obtained by the Gibbs sampler. In the second stage the building roof contours are identified among all high objects extracted previously.
Doutor
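The recursive splitting stage (step 1 of the methodology above) can be sketched on a toy height grid: a square block is kept whole if its height range is within a homogeneity threshold, otherwise it is split into four quadrants and the test recurses. The grid values, threshold, and `split` helper are invented for illustration; the Bayesian/MRF merging stage is not shown.

```python
# Illustrative quadtree splitting of a DEM into homogeneous regions.

def split(grid, x, y, size, threshold, out):
    block = [grid[j][i] for j in range(y, y + size)
                        for i in range(x, x + size)]
    if size == 1 or max(block) - min(block) <= threshold:
        out.append((x, y, size))  # record a homogeneous leaf region
        return
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        split(grid, x + dx, y + dy, half, threshold, out)

dem = [[2, 2, 9, 9],  # toy 4x4 DEM: flat ground (2 m) and one roof (9 m)
       [2, 2, 9, 9],
       [2, 2, 2, 2],
       [2, 2, 2, 2]]
regions = []
split(dem, 0, 0, 4, 0, regions)
print(regions)  # [(0, 0, 2), (2, 0, 2), (0, 2, 2), (2, 2, 2)]
```

On real laser data, slight height noise makes such leaves over-fragmented, which is exactly what the subsequent Bayesian region merging (with a CAR prior and Gibbs sampling) is there to repair.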
Furlong, Vitor Badiale. "Automation of a reactor for enzymatic hydrolysis of sugar cane bagasse : Computational intelligence-based adaptive control". Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/7394.
Made available in DSpace on 2016-09-23. Previous issue date: 2015-03-20
No funding was received.
The continuous demand growth for liquid fuels, alongside the decrease of fossil oil reserves, unavoidable in the long term, induces investigations for new energy sources. A possible alternative is the use of bioethanol, produced from renewable resources such as sugarcane bagasse. Two thirds of the cultivated sugarcane biomass are sugarcane bagasse and leaves, not fermentable when the current, first-generation (1G) process is used. A great interest has been given to techniques capable of utilizing the carbohydrates from this material. Among them, production of second-generation (2G) ethanol is a possible alternative. 2G ethanol requires two additional operations: a pretreatment and a hydrolysis stage. Regarding the hydrolysis, the dominant technical solution has been based on the use of enzymatic complexes to hydrolyze the lignocellulosic substrate. To ensure the feasibility of the process, a high final concentration of glucose after the enzymatic hydrolysis is desirable. To achieve this objective, a high solid consistency in the reactor is necessary. However, a high load of solids generates a series of operational difficulties within the reactor. This is a crucial bottleneck of the 2G process. A possible solution is using a fed-batch process, with feeding profiles of enzymes and substrate that enhance the process yield and productivity. The main objective of this work was to implement and test a system to infer online concentrations of fermentable carbohydrates in the reactive system, and to optimize the feeding strategy of substrate and/or enzymatic complex, according to a model-based control strategy. Batch and fed-batch experiments were conducted in order to test the adherence of four simplified kinetic models. The model with best adherence to the experimental data (a modified Michaelis-Menten model with inhibition by the product) was used to train an Artificial Neural Network (ANN) as a soft sensor to predict glucose concentrations.
Further, this ANN may be used in a closed-loop control strategy. A feeding profile optimizer was implemented, based on the optimal control approach. The ANN was capable of inferring the product concentration from the available data with good adherence (determination coefficient of 0.972). The optimization algorithm generated profiles that increased a process performance index while maintaining operational levels within the reactor, reaching glucose concentrations close to those utilized in current first-generation technology (ranging between 156.0 g.L⁻¹ and 168.3 g.L⁻¹). However, rough estimates for scaling up the reactor to industrial dimensions indicate that this conventional reactor design must be replaced by a two-stage reactor, to minimize the volume of liquid to be stirred.
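One common form of a "modified Michaelis-Menten model with inhibition by the product" is competitive product inhibition, v = Vmax·S / (Km·(1 + P/Ki) + S); the sketch below integrates it for a toy batch hydrolysis. The rate form and every parameter value here are assumptions for illustration, not the thesis's fitted model.

```python
# Hedged sketch: Michaelis-Menten kinetics with competitive product
# inhibition, integrated by forward Euler for a batch hydrolysis.

def rate(S, P, Vmax=2.0, Km=50.0, Ki=30.0):
    """Glucose release rate (g/L/h) given substrate S and product P (g/L)."""
    return Vmax * S / (Km * (1.0 + P / Ki) + S)

def simulate(S0, hours, dt=0.01):
    """Batch run: substrate S is converted one-to-one into product P."""
    S, P = S0, 0.0
    for _ in range(int(hours / dt)):
        v = rate(S, P)
        S, P = S - v * dt, P + v * dt  # simultaneous update conserves mass
    return S, P

S, P = simulate(S0=200.0, hours=48)
print(f"after 48 h: substrate {S:.1f} g/L, glucose {P:.1f} g/L")
```

The accumulating glucose progressively slows the rate, which is why fed-batch feeding profiles (the subject of the optimizer above) matter: they trade off substrate load against inhibition and mixing constraints.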
A crescente demanda por combustíveis líquidos, bem como a diminuição das reservas de petróleo, inevitáveis a longo prazo, induzem pesquisas por novas fontes de energia. Uma possível solução é o uso do bioetanol, produzido de resíduos, como o bagaço de cana-de-açúcar. Dois terços da biomassa cultivada são bagaço e folhas. Estas frações não são fermentescíveis quando se usa a tecnologia de primeira geração atual (1G). Um grande interesse vem sendo prestado a técnicas capazes de utilizar os carboidratos deste material. Dentre elas, a produção de etanol de segunda geração (2G) é uma possível alternativa. Etanol 2G requer duas operações adicionais: etapas de pré-tratamento e hidrólise. Considerando a hidrólise, a técnica dominante tem sido a utilização de complexos enzimáticos para hidrolisar o substrato lignocelulósico. Para assegurar a viabilidade do processo, uma alta concentração final de glicose é necessária ao final do processo. Para atingir esse objetivo, uma alta concentração de sólidos no reator é necessária. No entanto, uma carga grande de sólidos gera uma série de dificuldades operacionais para o processo. Este é um gargalo crucial do processo 2G. Uma possível solução é utilizar um processo de batelada alimentada, com perfis de alimentação de enzima e substrato para aumentar produtividade e rendimento. O principal objetivo deste trabalho é implementar e testar um sistema para inferir concentração de carboidratos fermentescíveis automaticamente e otimizar a política de substrato e/ou enzima em tempo real, de acordo com uma estratégia de controle baseada em modelo cinético. Experimentos de batelada e batelada alimentada foram realizados a fim de testar a aderência de 4 modelos cinéticos simplificados. O modelo com melhor aderência aos dados experimentais (um modelo de Michaelis-Menten modificado com inibição por produto) foi utilizado para gerar dados a fim de treinar uma rede neural artificial para predizer concentrações de glicose automaticamente.
Em estudos futuros, esta rede pode ser utilizada para compor o fechamento da malha de controle. Um otimizador de perfil de alimentação foi implementado, baseado em uma abordagem de controle ótimo. A rede neural foi capaz de predizer a concentração de produto com os dados disponíveis de maneira satisfatória (Coeficiente de Determinação de 0.972). O algoritmo de otimização gerou perfis que aumentaram a performance do processo enquanto manteve as condições da hidrólise dentro de níveis operacionais, e gerou concentrações de glicose próximas às obtidas pelo caldo de cana-de-açúcar da primeira geração (valores entre 156.0 g.L⁻¹ e 168.3 g.L⁻¹). No entanto, estimativas iniciais de aumento de escala do processo demonstraram que, para atingir dimensões industriais, o projeto do reator utilizado deve ser analisado, substituindo o mesmo por um processo em dois estágios para diminuir o volume do reator e a energia para agitação.
Sandillon, Rezer Noémie Fleur. "Apprentissage de grammaires catégorielles : transducteurs d’arbres et clustering pour induction de grammaires catégorielles". Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR14940/document.
Nowadays, we have become familiar with software interacting with us using natural language (for example in question-answering systems for after-sale services, human-computer interaction or simple discussion bots). These tools have to either react by keyword extraction or, more ambitiously, try to understand the sentence in its context. Though the simplest of these programs only have a set of pre-programmed sentences to react to recognized keywords (these systems include Eliza but also more modern systems like Siri), more sophisticated systems make an effort to understand the structure and the meaning of sentences (these include systems like Watson), allowing them to generate consistent answers, both with respect to the meaning of the sentence (semantics) and with respect to its form (syntax). In this thesis, we focus on syntax and on how to model syntax using categorial grammars. Our goal is to generate syntactically accurate sentences (without the semantic aspect) and to verify that a given sentence belongs to a language, in our case French. We note that AB grammars, with the exception of some phenomena like quantification or extraction, are also a good basis for semantic purposes. We cover both grammar extraction from treebanks and parsing using the extracted grammars. For this purpose, we present two extraction methods and test the resulting grammars using standard parsing algorithms. The first method focuses on creating a generalized tree transducer, which transforms syntactic trees into derivation trees corresponding to an AB grammar. Applied on the various French treebanks, the transducer's output gives us a wide-coverage lexicon and a grammar suitable for parsing. The transducer, even if it differs only slightly from the usual definition of a top-down transducer, offers several new, compact ways to express transduction rules.
We currently transduce 92.5% of all sentences in the treebanks into derivation trees. For our second method, we use a unification algorithm, guiding it with a preliminary clustering step, which gathers the words according to their context in the sentence. The comparison between the transduced trees and this method gives a promising 91.3% similarity. Finally, we have tested our grammars on sentence analysis with a probabilistic CYK algorithm and a formula assignment step done with a supertagger. The obtained coverage lies between 84.6% and 92.6%, depending on the input corpus. The probabilities, estimated for the type of words and for the rules, enable us to select only the "best" derivation tree. All our software is available for download under the GNU GPL licence.
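The CYK parsing setup for an AB grammar can be sketched with just the two application rules of the formalism (forward: A/B followed by B yields A; backward: B followed by B\A yields A). The three-word French lexicon below is an invented toy; there are no probabilities and no supertagger in this sketch.

```python
# Toy CYK-style recognizer for an AB grammar. A category is either an
# atom ("s", "np", "n") or a tuple: ("/", A, B) is A/B (seeks a B on its
# right), ("\\", B, A) is B\A (seeks a B on its left).

LEXICON = {
    "le":   {("/", "np", "n")},   # determiner: NP/N
    "chat": {"n"},                # noun
    "dort": {("\\", "np", "s")},  # intransitive verb: NP\S
}

def parse(words):
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICON[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for x in chart[i][j]:
                    for y in chart[j][k]:
                        if isinstance(x, tuple) and x[0] == "/" and x[2] == y:
                            chart[i][k].add(x[1])   # forward application
                        if isinstance(y, tuple) and y[0] == "\\" and y[1] == x:
                            chart[i][k].add(y[2])   # backward application
    return chart[0][n]

print("s" in parse(["le", "chat", "dort"]))  # True
```

A probabilistic version, as used in the thesis, additionally scores each chart entry and keeps back-pointers so the single best derivation tree can be recovered.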
Vitorino dos Santos Filho, Jairson. "CHROME: a model-driven component-based rule engine". Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1638.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Vitorino dos Santos Filho, Jairson; Pierre Louis Robin, Jacques. CHROME: a model-driven component-based rule engine. 2009. Tese (Doutorado). Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2009.
Chatalic, Philippe. "Raisonnement deductif en presence de connaissances imprecises et incertaines : un systeme base sur la theorie de dempster-shafer". Toulouse 3, 1986. http://www.theses.fr/1986TOU30189.
Maddali, Hanuma Teja. "Inferring social structure and dominance relationships between rhesus macaques using RFID tracking data". Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51866.
Rusinowitch, Michaël. "Démonstration automatique par des techniques de réécritures". Nancy 1, 1987. http://www.theses.fr/1987NAN10358.
Singh, Vidisha. "Integrative analysis and modeling of molecular pathways dysregulated in rheumatoid arthritis. Computational systems biology approach for the study of rheumatoid arthritis: from a molecular map to a dynamical model. RA-map: building a state-of-the-art interactive knowledge base for rheumatoid arthritis. Automated inference of Boolean models from molecular interaction maps using CaSQ". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASL039.
Rheumatoid arthritis (RA) is a complex autoimmune disease that results in synovial inflammation and hyperplasia leading to bone erosion and cartilage destruction in the joints. The aetiology of RA remains partially unknown, yet it involves a variety of intertwined signalling cascades and the expression of pro-inflammatory mediators. In the first part of my PhD project, we present a systematic effort to construct a fully annotated, expert-validated, state-of-the-art knowledge base for RA. The RA map illustrates significant molecular and signalling pathways implicated in the disease. Signal transduction is depicted from receptors to the nucleus systematically using the systems biology graphical notation (SBGN) standard representation. Manual curation based on strict criteria and restricted to only human-specific studies limits the occurrence of false positives in the map. The RA map can serve as an interactive knowledge base for the disease but also as a template for omic data visualization and as an excellent base for the development of a computational model. The static nature of the RA map could provide a relatively limited understanding of the emerging behaviour of the system under different conditions. Computational modeling can reveal dynamic network properties through in silico perturbations and can be used to test and predict assumptions. In the second part of the project, we present a pipeline allowing the automated construction of a large Boolean model, starting from a molecular interaction map. For this purpose, we developed the tool CaSQ (CellDesigner as SBML-qual), which automates the conversion of molecular maps to executable Boolean models based on topology and map semantics. The resulting Boolean model could be used for in silico simulations to reproduce known biological behavior of the system and to further predict novel therapeutic targets.
For benchmarking, we used different disease maps and models with a focus on the large molecular map for RA. In the third part of the project we present our efforts to create a large-scale dynamical (Boolean) model for rheumatoid arthritis fibroblast-like synoviocytes (RA FLS). Among the many cells of the joint and of the immune system involved in the pathogenesis of RA, RA FLS play a significant role in the initiation and perpetuation of destructive joint inflammation. RA FLS are shown to express immuno-modulating cytokines, adhesion molecules, and matrix-modelling enzymes. Moreover, RA FLS display high proliferative rates and an apoptosis-resistant phenotype. RA FLS can also behave as primary drivers of inflammation, and RA FLS-directed therapies could become a complementary approach to immune-directed therapies. The challenge is to predict the optimal conditions that would favour RA FLS apoptosis, limit inflammation, slow down the proliferation rate and minimize bone erosion and cartilage destruction.
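The kind of executable Boolean model CaSQ derives can be illustrated with a tiny synchronous simulation. The three-node network and its update rules below are invented for illustration and are not the RA model itself.

```python
# Minimal synchronous Boolean simulation: each node's next value is a
# Boolean function of the current state; a fixed point is an attractor.

RULES = {
    "nfkb":         lambda s: s["tnf"] and not s["drug"],
    "inflammation": lambda s: s["nfkb"],
    "tnf":          lambda s: s["tnf"],    # treated as a constant input
    "drug":         lambda s: s["drug"],   # treated as a constant input
}

def step(state):
    return {node: bool(rule(state)) for node, rule in RULES.items()}

def attractor(state, max_steps=20):
    """Iterate synchronous updates until a fixed point (if any) is reached."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    return state

untreated = attractor({"tnf": True, "drug": False,
                       "nfkb": False, "inflammation": False})
treated = attractor({"tnf": True, "drug": True,
                     "nfkb": False, "inflammation": False})
print(untreated["inflammation"], treated["inflammation"])  # True False
```

In silico perturbation experiments of this sort (knocking nodes on or off and comparing attractors) are how such models are used to screen candidate therapeutic targets before wet-lab validation.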
Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting". Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.
Krtek, Lukáš. "Učení jazykových obrázků pomocí restartovacích automatů". Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-335550.
Kovářová, Lenka. "Testování učení restartovacích automatů genetickými algoritmy". Master's thesis, 2012. http://www.nusl.cz/ntk/nusl-313874.
McAllester, David. "Automatic Recognition of Tractability in Inference Relations". 1990. http://hdl.handle.net/1721.1/6528.