
Dissertations / Theses on the topic 'Automatic reasoning'


Consult the top 50 dissertations / theses for your research on the topic 'Automatic reasoning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Robertson, Neil. "Automatic causal reasoning for video surveillance." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.432567.

2

Urbas, Matej. "Mechanising heterogeneous reasoning in theorem provers." Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708290.

3

Khoshnevisan-Tehrani, Hassam. "Automatic transformation systems based on function-level reasoning." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/46937.

4

Eliassen, Lars Moland. "Automatic Fish Classification : Using Image Processing and Case-Based Reasoning." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18457.

Abstract:
Counting and classifying fish moving upstream in rivers to spawn is a useful way of monitoring the population of different species. Today, some commercial solutions exist, along with some research that addresses the area. Case-based reasoning is a process that can be used to solve new problems based on previous problems. This thesis studies the possibilities of combining image processing techniques and case-based reasoning to classify species of fish that are similar to each other in shape, size, and color. Methods for image preprocessing are discussed and tested. Methods for feature extraction and a case-based reasoning prototype are proposed, implemented, and tested with promising results.
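To make the retrieve step of such a prototype concrete, here is a minimal sketch assuming a case is simply a numeric feature vector (stand-ins for the shape, size, and colour measurements mentioned above) paired with a species label, and retrieval is nearest-neighbour matching. All names and values are illustrative, not taken from the thesis.

```python
import math

# Illustrative case base: (feature vector, species label). The two
# numbers stand in for shape/size/colour measurements of a fish.
case_base = [
    ((2.8, 0.41), "salmon"),
    ((3.1, 0.38), "salmon"),
    ((2.2, 0.55), "trout"),
    ((2.0, 0.52), "trout"),
]

def retrieve(query):
    """1-NN retrieval: return the label of the closest stored case."""
    _, label = min(case_base, key=lambda c: math.dist(c[0], query))
    return label

print(retrieve((2.9, 0.40)))  # -> salmon
```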
5

Woodbury, Charla Jean. "Automatic Extraction From and Reasoning About Genealogical Records: A Prototype." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2335.

Abstract:
Family history research on the web is increasing in popularity, and many competing genealogical websites host large amounts of data-rich, unstructured, primary genealogical records. It is labor-intensive, however, even after making these records machine-readable, for humans to make these records easily searchable. What we need are computer tools that can automatically produce indices and databases from these genealogical records and can automatically identify individuals and events, determine relationships, and put families together. We propose here a possible solution—specialized ontologies, built specifically for extracting information from primary genealogical records, with expert logic and rules to infer genealogical facts and assemble relationship links between persons with respect to the genealogical events in their lives. The deliverables of this solution are extraction ontologies that can extract from parish or town records, annotated versions of original documents, data files of individuals and events, and rules to infer family relationships from stored data. The solution also provides for the ability to query over the rules and data files and to obtain query-result justification linking back to primary genealogical records. An evaluation of the prototype solution shows that the extraction has excellent recall and precision results and that inferred facts are correct.
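The rule layer of such a solution can be pictured as forward inference over extracted facts. A hedged sketch follows; the fact encoding and the sibling rule are invented for illustration, the thesis's ontologies and rules being far richer.

```python
# Extracted facts: (child, parent) pairs, as an extraction step might emit.
parent_of = {("mary", "john"), ("peter", "john"), ("anna", "luis")}

def infer_siblings(facts):
    """Rule: X and Y are siblings if they share a parent and X != Y."""
    siblings = set()
    for c1, p1 in facts:
        for c2, p2 in facts:
            if p1 == p2 and c1 != c2:
                siblings.add(tuple(sorted((c1, c2))))
    return siblings

print(infer_siblings(parent_of))  # {('mary', 'peter')}
```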
6

Low, Harold William Capen IV. "Story understanding in Genesis : exploring automatic plot construction through commonsense reasoning." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66440.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 72).
Whether through anecdotes, folklore, or formal history, humans learn the lessons and expectations of life from stories. If we are to build intelligent programs that learn as humans do, such programs must understand stories as well. Casting narrative text in an information-rich representation affords AI research platforms, such as the Genesis system, the capacity to understand the events of stories individually. To understand a story, however, a program must understand not just events, but also how events cause and motivate one another. In order to understand the relationships between these events, stories must be saturated with implicit details, connecting given events into coherent plot arcs. In my research, my first step was to analyze a range of story summaries in detail. Using nearly 50 rules, applicable to brief summaries of stories taken from international politics, group dynamics, and basic human emotion, I demonstrate how a rendition of Frank Herbert's Dune can be automatically understood so as to produce an interconnected story network of over one hundred events. My second step was to explore the nuances of rule construction, finding which rules are needed to create story networks reflective of proper implicit understanding and how we, as architects, must shape those rules to be understood. In particular, I develop a method that constructs new rules using the rules already embedded in stories, a representation of higher-order thinking that enables us to speak of our ideas as objects.
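Such connection rules can be sketched as forward chaining over story events: a rule matches a pattern of stated events and inserts an explanatory link between them. A toy illustration follows; the event encoding and the single rule are assumptions, not Genesis's actual representation.

```python
# Events are (agent, action, target) triples; links connect event indices.
events = [("duke", "harms", "paul"), ("paul", "flees", "duke")]

def connect(events):
    """One connection rule: if X harms Y and Y later flees X, the harm
    motivates the fleeing. Real Genesis rules are richer if/then patterns."""
    links = []
    for i, (a1, act1, t1) in enumerate(events):
        for j, (a2, act2, t2) in enumerate(events):
            if i < j and act1 == "harms" and act2 == "flees" \
                    and a2 == t1 and t2 == a1:
                links.append((i, j, "motivates"))
    return links

print(connect(events))  # [(0, 1, 'motivates')]
```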
by Harold William Capen Low, IV.
M.Eng.
7

Rode, Benjamin Paul. "Making sense of common sense: learning, fallibilism, and automated reasoning." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004366.

8

Moshir, Moghaddam Kianosh. "Automated Reasoning Support for Invasive Interactive Parallelization." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84830.

Abstract:
To parallelize a sequential source code, a parallelization strategy must be defined that transforms the sequential source code into an equivalent parallel version. Since parallelizing compilers can sometimes transform sequential loops and other well-structured codes into parallel ones automatically, we are interested in finding a solution to semi-automatically parallelize codes that compilers are not able to parallelize automatically, mostly because of the weakness of classical data and control dependence analysis, in order to simplify the process of transforming the codes for programmers. Invasive Interactive Parallelization (IIP) hypothesizes that by using an intelligent system that guides the user through an interactive process, one can boost parallelization in the above direction. The intelligent system's guidance relies on classical code analysis and pre-defined parallelizing transformation sequences. To support its main hypothesis, IIP suggests encoding parallelizing transformation sequences in terms of IIP parallelization strategies that dictate default ways to parallelize various code patterns by using facts obtained both from classical source code analysis and directly from the user. In this project, we investigate how automated reasoning can support the IIP method in order to parallelize a sequential code with acceptable performance but faster than manual parallelization. We have looked at two special problem areas: divide-and-conquer algorithms and loops in the source codes. Our focus is on parallelizing four sequential legacy C programs, namely Quicksort, Mergesort, the Jacobi method, and matrix multiplication and summation, for both OpenMP and MPI environments, by developing an interactive parallelizing assistance tool that provides users with the assistance needed for parallelizing a sequential source code.
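The divide-and-conquer pattern being parallelized can be sketched in a few lines. The thesis targets C with OpenMP and MPI; the following is only a Python analogue that runs the two independent halves of a merge sort on a process pool.

```python
from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    """Sequential merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def merge_sort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))

def parallel_merge_sort(xs, workers=2):
    """The two halves are data-independent, so the top-level divide
    step can safely run them on separate workers."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    with ProcessPoolExecutor(max_workers=workers) as pool:
        left, right = pool.map(merge_sort, [xs[:mid], xs[mid:]])
    return merge(left, right)

if __name__ == "__main__":
    print(parallel_merge_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```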
9

Nordström, Markus. "Automatic Source Code Classification : Classifying Source Code for a Case-Based Reasoning System." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-25519.

Abstract:
This work has investigated the possibility of classifying Java source code into cases for a case-based reasoning system. Case-based reasoning is a problem-solving method in Artificial Intelligence that uses knowledge of previously solved problems to solve new problems. A case in case-based reasoning consists of two parts: the problem part and the solution part. The problem part describes a problem that needs to be solved, and the solution part describes how this problem was solved. In this work, the problem is described as a Java source file, using words that describe the content of the source file, and the solution is a classification of the source file along with the source code. To classify Java source code, a classification system was developed. It consists of four analyzers: a type filter, a documentation analyzer, a syntactic analyzer, and a semantic analyzer. The type filter determines if a Java source file contains a class or an interface. The documentation analyzer determines the level of documentation in a source file to gauge the usefulness of the file. The syntactic analyzer extracts statistics from the source code to be used for similarity, and the semantic analyzer extracts semantics from the source code. The finished classification system is formed as a kd-tree, where the leaf nodes contain the classified source files, i.e. the cases. Furthermore, a vocabulary was developed to contain the domain knowledge about the Java language. The resulting kd-tree was found to be imbalanced when tested, as the majority of the source files analyzed were placed in the left-most leaf nodes. The conclusion was that using documentation as a part of the classification made the tree imbalanced, and thus another way has to be found, since source code is not documented to such an extent that it would be useful for this purpose.
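The case organisation can be sketched as a kd-tree built over numeric feature vectors extracted from source files. The two features below (method count, comment ratio) are illustrative stand-ins for the statistics the analyzers extract.

```python
def build_kdtree(cases, depth=0):
    """cases: list of (feature_vector, payload); split axes in rotation,
    placing the median case at each node."""
    if not cases:
        return None
    axis = depth % len(cases[0][0])
    cases = sorted(cases, key=lambda c: c[0][axis])
    mid = len(cases) // 2
    return {"case": cases[mid],
            "left": build_kdtree(cases[:mid], depth + 1),
            "right": build_kdtree(cases[mid + 1:], depth + 1)}

# Feature vectors: (method count, comment ratio); payload: file name.
files = [((12, 0.30), "Parser.java"), ((3, 0.05), "Util.java"),
         ((25, 0.45), "Engine.java")]
tree = build_kdtree(files)
print(tree["case"])  # the median on axis 0 becomes the root
```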
10

Fuchs, Alexander. "Evolving model evolution." [Iowa City, Iowa]: University of Iowa, 2009. http://ir.uiowa.edu/etd/361.

11

Salama, Mohamed Ahmed Said. "Automatic test data generation from formal specification using genetic algorithms and case based reasoning." Thesis, University of the West of England, Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252562.

12

Batmaz, Firat. "Semi-Automatic assessment of students' graph-based diagrams." Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/8431.

Abstract:
Diagrams are increasingly used in many design methods and are taught in a variety of contexts in higher education, such as database conceptual design or software design in computer science. They are an important part of many assessments. Currently, computer-aided assessments are widely used for multiple-choice questions, but they lack the ability to assess a student's knowledge in a more comprehensive way, which is required for diagram-type student work. The aim of this research is to develop a semi-automatic assessment framework that enables the use of computers to support the assessment process for diagrammatic solutions, with a focus on ensuring the consistency of grades and feedback on solutions. A novel trace model, which captures the design traces of student solutions, was developed as a part of the framework and was used to provide the matching criteria for grouping the solutions. A new marking style, partial marking, was developed to mark these solution groups manually. The case-based reasoning method is utilised in the framework to mark some of the groups automatically. A guideline for scenario writing was proposed to increase the efficiency of automatic marking. A prototype diagram editor, a marking tool, and a scenario-writing environment were implemented for the proposed framework in order to demonstrate proof of concept. The results of experiments show that the framework is feasible to use in formative assessment and that it provides consistent marking and personalised feedback to the students. The framework also has the potential to significantly reduce the time and effort required by the examiner to mark student diagrams. Although the constructed framework was specifically used for the assessment of database diagrams, it is generic enough to be used for other types of graph-based diagram.
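The grouping idea behind partial marking can be sketched as follows: reduce each student diagram to a trace signature, bucket identical signatures together, and let a single mark (entered manually or retrieved by case-based reasoning) cover the whole group. The signatures and marks here are invented for illustration.

```python
from collections import defaultdict

# Each solution reduced to a trace signature: a frozenset of design steps.
solutions = {
    "student1": frozenset({"entity:Order", "rel:Order-Customer"}),
    "student2": frozenset({"entity:Order", "rel:Order-Customer"}),
    "student3": frozenset({"entity:Order"}),
}

groups = defaultdict(list)
for student, trace in solutions.items():
    groups[trace].append(student)          # identical traces share a group

marks = {frozenset({"entity:Order", "rel:Order-Customer"}): 10,
         frozenset({"entity:Order"}): 6}   # one mark per group, not per student

for trace, students in groups.items():
    for s in students:
        print(s, "->", marks[trace])
```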
13

Loch-Dehbi, Sandra. "Algebraic, logical and stochastic reasoning for the automatic prediction of 3D building structures." Bonn: Universitäts- und Landesbibliothek Bonn, 2021. http://d-nb.info/1227990502/34.

14

Al-Sultany, Ghaidaa Abdalhussein Billal. "Automatic message annotation and semantic interface for context aware mobile computing." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/6564.

Abstract:
In this thesis, the concept of mobile messaging awareness has been investigated by designing and implementing a framework that can annotate short text messages with a context ontology for semantic reasoning, inference, and classification purposes. The keywords of a text message are identified and annotated with concepts, entities, and knowledge drawn from an ontology, without the need for a learning process, and the proposed framework supports semantic-reasoning-based message awareness for categorization purposes. The first stage of the research is developing a framework for facilitating mobile communication with short annotated text messages (SAMS), which annotates a short text message with part-of-speech tags augmented with internal and external metadata. In the SAMS framework, the annotation process is carried out automatically at the time of composing a message. The metadata is collected from the device's file system and the message header, and is then accumulated with the message's tagged keywords to form an XML file. The significance of the annotation process is to assist the proposed framework during search and retrieval in identifying the tagged keywords, and Semantic Web technologies are utilised to improve the reasoning mechanism. Later, the proposed framework is further improved into Contextual Ontology-based Short Text Message reasoning (SOIM). SOIM enhances the search capabilities of SAMS by adopting short text message annotation and semantic reasoning with a domain ontology, where the domain ontology is modelled as a set of ontological knowledge modules that capture the features of contextual entities and the features of a particular event or situation. Fundamentally, SOIM relies on hierarchical semantic distance to compute an approximate match degree between a new set of relevant keywords and their corresponding abstract class in the domain ontology. Adopting a contextual ontology leverages the framework's performance by enhancing text comprehension and message categorization. Fuzzy set and rough set theory have been integrated with SOIM to improve its inference capabilities and efficiency. Since SOIM chooses the pattern matched to the message by degree of similarity, the issue of choosing the best-retrieved pattern arises during decision-making. A fuzzy reasoning classifier, with rules based on fuzzy set theory for decision-making, has been applied on top of the SOIM framework in order to increase the accuracy of the classification process with clearer decisions. The issue of uncertainty in the system has been addressed by utilising rough set theory, whereby the irrelevant and indecisive properties that negatively affect the framework's efficiency are ignored during the matching process.
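The hierarchical semantic distance SOIM relies on can be sketched as the path length between two concepts through their lowest common ancestor in the ontology, converted into a match degree. The toy hierarchy and the 1/(1 + path) conversion are assumptions for illustration, not the thesis's definitions.

```python
# Toy concept hierarchy: child -> parent.
parents = {"football": "sport", "tennis": "sport",
           "sport": "event", "meeting": "event"}

def ancestors(c):
    """Chain from a concept up to the root, the concept itself first."""
    chain = [c]
    while c in parents:
        c = parents[c]
        chain.append(c)
    return chain

def match_degree(c1, c2):
    """1 / (1 + path length through the lowest common ancestor)."""
    a1, a2 = ancestors(c1), ancestors(c2)
    common = next(a for a in a1 if a in a2)
    return 1.0 / (1.0 + a1.index(common) + a2.index(common))

print(match_degree("football", "tennis"))   # 0.33... (sibling concepts)
print(match_degree("football", "meeting"))  # 0.25 (farther apart)
```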
15

Chan, Fiona. "Development of matrices abstract reasoning items to assess fluid intelligence." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277914.

Abstract:
Matrices reasoning tests, in which participants attempt to figure out the missing piece of a matrix, are one of the most popular types of tests for measuring general intelligence. This thesis introduces several methods of developing matrices items and presents them in different test forms to assess general intelligence. Part 1 introduces the development of a matrices test with reference to Carpenter's five rules for Raven's Progressive Matrices. The test items developed were administered together with the Standard Raven's Progressive Matrices (SPM). Results based on confirmatory factor analysis and inter-item correlation demonstrate good construct validity and reliability. Item characteristics are explored with Item Response Theory (IRT) analyses. Part 2 introduces the development of a large item bank with multiple alternatives for each SPM item, with reference to the item components of the original SPM. Results showed satisfactory test validity and reliability when using the alternative items in a test. Findings also support the hypothesis that the combination of item components accounts for item difficulty. The work lays the foundation for the future development of computer-adaptive versions of Raven's Progressive Matrices. Part 3 introduces the development of an automatic matrix item generator and illustrates the results of the analyses of items generated using the distribution-of-three rule. Psychometric properties of the generated items are explored to support the validity of the generator. Figural complexity, features, and the frequency at which certain rules were used are discussed to account for the difficulty of the items. Results of further analyses exploring the underlying factors of the difficulty of the generated items are presented and discussed. The suggested factors explain a substantial amount of the variance in item difficulty but are insufficient to predict it, so adaptive on-the-fly item generation is not yet possible for the test at this stage. Overall, the methods for creating matrices reasoning tests introduced in this dissertation provide a useful reference for research on abstract reasoning and fluid intelligence measurement. Implications for other areas of psychometric research are also discussed.
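The distribution-of-three rule used by the generator in Part 3 can be sketched directly: each of three attribute values appears exactly once in every row of a 3x3 matrix, and the hidden last cell is the item's answer. The shape names stand in for whatever figural attribute is being varied.

```python
import random

def distribution_of_three(values):
    """Build a 3x3 matrix in which every row is a permutation of the
    three values, then hide the last cell (the correct answer)."""
    rows = [random.sample(values, len(values)) for _ in range(3)]
    answer = rows[2][2]
    rows[2][2] = "?"
    return rows, answer

matrix, answer = distribution_of_three(["circle", "square", "triangle"])
for row in matrix:
    print(row)
print("answer:", answer)
```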
16

Belard, Nuno. "Reasoning about models: detecting and isolating abnormalities in diagnostic systems." PhD thesis, Université Paul Sabatier - Toulouse III, 2012. http://tel.archives-ouvertes.fr/tel-00719547.

Abstract:
In Model-Based Diagnosis, a set of inference rules is typically exploited to compute diagnoses, using a scientific and mathematical theory about the system under diagnosis together with a set of observations. Contrary to the classical assumptions, Models are often abnormal with respect to a set of required properties, which naturally affects the quality of the diagnoses [at Airbus]. A theory of reality, information, and cognition is created to redefine, from a model-theoretic perspective, the classical framework of Model-Based Diagnosis. This makes it possible to formalise abnormalities and their relation to properties of the diagnoses. Building on this work, and on the idea that an implemented diagnostic system can itself be seen as a system to be diagnosed, a theory of meta-diagnosis is developed that enables the detection and isolation of abnormalities in the Models of diagnostic systems. The theory is put into practice through a tool, MEDITO, and successfully tested on a set of industrial problems at Airbus. Since different Airbus diagnostic systems, suffering from various abnormalities, may compute different diagnoses, a set of methods and tools is developed to: 1) determine the consistency between diagnoses and 2) validate and compare the performance of these diagnostic systems. This work relies on an original bridge between the Airbus diagnosis framework and its academic counterpart. Finally, the theory of meta-diagnosis is generalised to cover meta-systems other than implemented diagnostic systems.
17

Huang, Xingang. "RELSA: automatic analysis of spatial data sets using visual reasoning techniques with an application to weather data analysis." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193665235735.

18

Fuchs, Alexander. "Evolving model evolution." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/361.

Abstract:
Automated theorem proving is a method to establish or disprove logical theorems. While these can be theorems in the classical mathematical sense, we are more concerned with logical encodings of properties of algorithms, hardware, and software. Especially in the area of hardware verification, propositional logic is used widely in industry. Satisfiability Modulo Theories (SMT) is a set of logics which extend propositional logic with theories relevant for specific application domains. In particular, software verification has received much attention, and efficient algorithms have been devised for reasoning over arithmetic and data types. Built-in support for theories by decision procedures is often significantly more efficient than reductions to propositional logic (SAT). Most efficient SAT solvers are based on the DPLL architecture, which is also the basis for most efficient SMT solvers. The main shortcoming of both kinds of logics is the weak support for non-ground reasoning, which noticeably limits the applicability to real-world systems. The Model Evolution Calculus (ME) was devised as a lifting of the DPLL architecture from the propositional setting to full first-order logic. In previous work, we created the solver Darwin as an implementation of ME, and showed how to adapt improvements from the DPLL setting. The first half of this thesis is concerned with ME and Darwin. First, we lift a further crucial ingredient of SAT and SMT solvers, lemma learning, to Darwin and evaluate its benefits. Then, we show how to use Darwin for finite model finding, and how this application benefits from lemma learning. In the second half of the thesis we present Model Evolution with Linear Integer Arithmetic (MELIA), a calculus combining function-free first-order logic with linear integer arithmetic (LIA). MELIA is based on ME and supports similar inference rules and redundancy criteria. We prove the correctness of the calculus, and show how to obtain complete proof procedures and decision procedures for some interesting classes of MELIA's logic. Finally, we explain in detail how MELIA can be implemented efficiently, based on the techniques employed in SMT solvers and Darwin.
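The propositional DPLL core that Model Evolution lifts to first-order logic fits in a few lines: unit propagation plus splitting on a literal. This is a standard textbook sketch with clauses as sets of signed integers, not Darwin's implementation.

```python
def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals or None. Literals are nonzero
    ints; -x denotes the negation of x."""
    clauses = [c for c in clauses if not c & assignment]       # drop satisfied
    clauses = [c - {-l for l in assignment} for c in clauses]  # prune falsified
    if not clauses:
        return assignment                                      # all satisfied
    if set() in map(set, clauses):
        return None                                            # conflict
    units = [next(iter(c)) for c in clauses if len(c) == 1]
    if units:
        return dpll(clauses, assignment | {units[0]})          # unit propagate
    lit = next(iter(clauses[0]))                               # split
    return (dpll(clauses, assignment | {lit})
            or dpll(clauses, assignment | {-lit}))

print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))  # e.g. frozenset({2, 3})
```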
19

Larew, Lalah W. "The effects of learning geometry using a computer-generated automatic draw tool on the levels of reasoning of college developmental students." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=581.

Abstract:
Thesis (Ed. D.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains ix, 103 p. Vita. Includes abstract. Includes bibliographical references (p. 91-96).
20

Romero, Moral Óscar. "Automating the multidimensional design of data warehouses." Doctoral thesis, Universitat Politècnica de Catalunya, 2010. http://hdl.handle.net/10803/6670.

Abstract:
Previous experiences in the data warehouse field have shown that the data warehouse multidimensional conceptual schema must be derived from a hybrid approach: i.e., by considering both the end-user requirements and the data sources as first-class citizens. As in any other system, requirements guarantee that the system devised meets the end-user necessities. In addition, since the data warehouse design task is a reengineering process, it must consider the underlying data sources of the organization: (i) to guarantee that the data warehouse can be populated from data available within the organization, and (ii) to allow the end-user to discover unknown additional analysis capabilities.

Currently, several methods for supporting the data warehouse modeling task have been provided. However, they suffer from some significant drawbacks. In short, requirement-driven approaches assume that requirements are exhaustive (and therefore do not consider the data sources to contain alternative interesting evidence for analysis), whereas data-driven approaches (i.e., those leading the design task from a thorough analysis of the data sources) rely on discovering as much multidimensional knowledge as possible from the data sources. As a consequence, data-driven approaches generate too many results, which mislead the user. Furthermore, automation of the design task is essential in this scenario, as it removes the dependency on an expert's ability to properly apply the method chosen, and the need to analyze the data sources, which is a tedious and time-consuming task (and can be unfeasible when working with large databases). In this sense, current automatable methods follow a data-driven approach, whereas current requirement-driven approaches overlook process automation, since they tend to work with requirements at a high level of abstraction. Indeed, this scenario is repeated in the data-driven and requirement-driven stages of current hybrid approaches, which suffer from the same drawbacks as pure data-driven or requirement-driven approaches.

In this thesis we introduce two different approaches for automating the multidimensional design of the data warehouse: MDBE (Multidimensional Design Based on Examples) and AMDO (Automating the Multidimensional Design from Ontologies). Both approaches were devised to overcome the limitations from which current approaches suffer. Importantly, our approaches start from opposite initial assumptions, but both consider the end-user requirements and the data sources as first-class citizens.

1. MDBE follows a classical approach, in which the end-user requirements are well known beforehand. This approach benefits from the knowledge captured in the data sources, but guides the design task according to requirements; consequently, it is able to work with and handle semantically poorer data sources. In other words, given high-quality end-user requirements, we can guide the process from the knowledge they contain and overcome having data sources of poor semantic quality.
2. AMDO, as its counterpart, assumes a scenario in which the available data sources are semantically richer. Thus, the proposed approach is guided by a thorough analysis of the data sources, which is properly adapted to shape the output result according to the end-user requirements. In this context, given high-quality data sources, we can overcome the lack of expressive end-user requirements.

Importantly, our methods establish a combined and comprehensive framework that can be used to decide, according to the inputs provided in each scenario, which is the best approach to follow. For example, we cannot follow the same approach in a scenario where the end-user requirements are clear and well known as in a scenario in which the end-user requirements are not evident or cannot be easily elicited (e.g., when the users are not aware of the analysis capabilities of their own sources). Interestingly, the need to have requirements beforehand is smoothed by having semantically rich data sources; in the absence of such sources, requirements gain relevance for extracting the multidimensional knowledge from the sources. Thus, we provide two approaches whose combination is exhaustive with regard to the scenarios discussed in the literature.
21

Marin-Urias, Luis Felipe. "Planification et contrôle de mouvements en interaction avec l'homme. Reasoning about space for human-robot interaction." PhD thesis, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00468918.

Abstract:
Human-robot interaction is a research field that has grown exponentially in recent years, presenting new challenges for the robot's geometric reasoning and for the sharing of space. To accomplish a task, the robot must not only reason about its own capabilities but also take human perception into account; that is, the robot must put itself in the human's place. In humans, the ability of visual perspective taking begins to appear around the 24th month and is used to determine whether another person can see an object or not. Implementing this kind of social capability improves the robot's cognitive capabilities and helps the robot interact better with humans. In this work, we present a geometric spatial reasoning mechanism that uses the psychological concepts of perspective taking and mental rotation in two general settings: motion planning for human-robot interaction, in which the robot uses egocentric perspective taking to evaluate several configurations where it can perform different interaction tasks; and face-to-face human-robot interaction, in which the robot uses the human's point of view as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
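In two dimensions, the perspective-taking primitive ("can the human see this object?") reduces to a field-of-view test from the human's pose, with occlusion checks layered on top. A geometric sketch with invented coordinates:

```python
import math

def in_field_of_view(observer_xy, heading_deg, fov_deg, target_xy):
    """True if the target lies within the observer's angular field of view."""
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - heading_deg + 180) % 360 - 180  # wrap to [-180, 180)
    return abs(diff) <= fov_deg / 2

human = (0.0, 0.0)  # at the origin, facing along +x, 120-degree FOV
print(in_field_of_view(human, 0, 120, (2.0, 0.5)))   # True: object ahead
print(in_field_of_view(human, 0, 120, (-2.0, 0.0)))  # False: object behind
```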
22

Reimann, Johan Michael. "Using Multiplayer Differential Game Theory to Derive Efficient Pursuit-Evasion Strategies for Unmanned Aerial Vehicles." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16151.

Abstract:
In recent years, Unmanned Aerial Vehicles (UAVs) have been used extensively in military conflict situations to execute intelligence, surveillance and reconnaissance missions. However, most of the current UAV platforms have limited collaborative capabilities, and consequently they must be controlled individually by operators on the ground. The purpose of the research presented in this thesis is to derive algorithms that can enable multiple UAVs to reason about the movements of multiple ground targets and autonomously coordinate their efforts in real-time to ensure that the targets do not escape. By improving the autonomy of multivehicle systems, the workload placed on the command and control operators is reduced significantly. To derive effective adversarial control algorithms, the adversarial scenario is modeled as a multiplayer differential game. However, due to the inherent computational complexity of multiplayer differential games, three less computationally demanding differential pursuit-evasion game-based algorithms are presented. The purpose of the algorithms is to quickly derive interception strategies for a team of autonomous vehicles. The algorithms are applicable to scenarios with different base assumptions, that is, the three algorithms are meant to complement one another by addressing different types of adversarial problems.
23

Lundberg, Didrik. "Provably Sound and Secure Automatic Proving and Generation of Verification Conditions." Thesis, KTH, Teoretisk datalogi, TCS, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239441.

Abstract:
Formal verification of programs can be done with the aid of an interactive theorem prover. The program to be verified is represented in an intermediate language representation inside the interactive theorem prover, after which statements and their proofs can be constructed. This is a process that can be automated to a high degree. This thesis presents a proof procedure to efficiently generate a theorem stating the weakest precondition for a program to terminate successfully in a state upon which a certain postcondition is placed. Specifically, the Poly/ML implementation of the SML metalanguage is used to generate a theorem in the HOL4 interactive theorem prover regarding the properties of a program written in BIR, an abstract intermediate representation of machine code used in the PROSPER project.
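The weakest-precondition computation at the heart of such a proof procedure can be sketched over a toy assignment/sequence/conditional language. Predicates are plain strings and substitution is naive textual replacement here, whereas HOL4 and BIR manipulate real terms; the encoding is an assumption for illustration only.

```python
# Statements: ("assign", var, expr), ("seq", s1, s2), ("if", cond, s1, s2).
def wp(stmt, post):
    """Weakest precondition of stmt with respect to postcondition post."""
    kind = stmt[0]
    if kind == "assign":            # wp(x := e, Q) = Q[e/x]
        _, var, expr = stmt
        return post.replace(var, f"({expr})")
    if kind == "seq":               # wp(s1; s2, Q) = wp(s1, wp(s2, Q))
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))
    if kind == "if":                # (c ==> wp(s1,Q)) /\ (~c ==> wp(s2,Q))
        _, cond, s1, s2 = stmt
        return (f"(({cond}) ==> {wp(s1, post)}) /\\ "
                f"((~({cond})) ==> {wp(s2, post)})")
    raise ValueError(kind)

prog = ("seq", ("assign", "x", "x + 1"), ("assign", "y", "x * 2"))
print(wp(prog, "y > 0"))  # ((x + 1) * 2) > 0
```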
24

Corrêa, da Silva Flávio S. "Automated reasoning with uncertainties." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/19647.

Abstract:
In this work we assume that uncertainty is a multifaceted concept which admits several different measures, and present a system for automated reasoning with multiple representations of uncertainty. Our focus is on problems which present more than one of these facets and therefore in which a multivalued representation of uncertainty and the study of its possibility of computational realisation are important for designing and implementing knowledge-based systems. We present a case study on developing a computational language for reasoning with uncertainty, starting with a semantically sound and computationally tractable language and gradually extending it with specialised syntactic constructs to represent measures of uncertainty, preserving its unambiguous semantic characterisation and computability properties. Our initial language is the language of normal clauses with SLDNF as the inference rule, and we select three facets of uncertainty, which are not exhaustive but cover many situations found in practical problems: vagueness, statistics and degrees of belief. To each of these facets we associate a specific measure: fuzzy measures to vagueness, probabilities on the domain to statistics and probabilities on possible worlds to degrees of belief. The resulting language is semantically sound and computationally tractable, and admits relatively efficient implementations employing α-β pruning and caching.
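The vagueness facet can be sketched by attaching fuzzy degrees to ground facts and evaluating a clause body with min as conjunction (one standard t-norm choice); the thesis's language is richer and also covers the two probabilistic facets.

```python
# Fuzzy facts: ground literal -> degree of membership in [0, 1].
facts = {"tall(john)": 0.7, "fast(john)": 0.9}

def eval_clause(head, body, facts):
    """The head holds to the degree of the weakest body literal (min),
    with unknown literals defaulting to degree 0."""
    return head, min(facts.get(lit, 0.0) for lit in body)

print(eval_clause("athletic(john)", ["tall(john)", "fast(john)"], facts))
# ('athletic(john)', 0.7)
```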
25

Yerikalapudi, Aparna Varsha. "Answer Extraction In Automated Reasoning." Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_theses/167.

Abstract:
One aspect of Automated Reasoning (AR) deals with writing computer programs that can answer questions using logical reasoning. An Automated Theorem Proving system (ATP system) translates a question to be answered into a first-order logic conjecture and attempts to prove the conjecture from a set of axioms provided, thereby producing a proof. If a proof is found, an answer extraction method can be applied to answer the original question. If more than one proof is possible, more than one answer may need to be extracted. For ATP systems that can find only one answer at a time, to answer questions that yield multiple answers, the ATP system can be re-invoked with a modified question to find other possible answers. In this thesis, an answer extraction method has been designed to extract more than one answer when an ATP system is used to answer a question that has multiple answers. The method is implemented in an interactive computer program, and the process is called multiple-answer extraction. The answer extraction software, called the multi-answer system, has a three-layered software architecture. SNARK, at the bottom-most layer, serves as the ATP system that finds single answers. The answer extractor, in the middle layer, extracts possible answers by re-invoking the ATP system. The top layer compares the extracted answers to the user's expected answers. The software is command-line driven. Keywords such as all, some, n (where n is a number), while, and until are specified on the command line to limit the number of answers to be extracted. The top layer allows the user to check properties of the answers, e.g., whether a specific element belongs to the set of answers obtained, or whether the user's set of answers is a subset of the answers returned by the multi-answer system. This is done using set operations, such as subset, element-of, union, difference, and intersection, on the user's set of answers and the extracted set of answers.
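The re-invocation scheme is generic: after each answer is found, block it and ask again, until the prover fails or a keyword-style limit (all, n, ...) is reached. In the sketch below, `prove` merely simulates the SNARK call by drawing from a fixed answer pool.

```python
def prove(pool, blocked):
    """Stand-in for one ATP invocation: return an answer not yet
    blocked, or None if no proof (i.e. no further answer) exists."""
    for answer in pool:
        if answer not in blocked:
            return answer
    return None

def extract_answers(pool, limit=None):
    """Multiple-answer extraction: re-invoke the prover, modifying the
    question each time by blocking the answers already found.
    limit=None behaves like the 'all' keyword; an int behaves like n."""
    answers, blocked = [], set()
    while limit is None or len(answers) < limit:
        ans = prove(pool, blocked)
        if ans is None:
            break
        answers.append(ans)
        blocked.add(ans)
    return answers

state_capitals = ["Sacramento", "Austin", "Albany"]
print(extract_answers(state_capitals, limit=2))  # ['Sacramento', 'Austin']
```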
26

Horsfall, Benjamin. "Automated reasoning for reflective programs." Thesis, University of Sussex, 2014. http://sro.sussex.ac.uk/id/eprint/49871/.

Abstract:
Reflective programming allows one to construct programs that manipulate or examine their behaviour or structure at runtime. One of the benefits is the ability to create generic code that is able to adapt to being incorporated into different larger programs, without modifications to suit each concrete setting. Due to the runtime nature of reflection, static verification is difficult and has been largely ignored or only weakly supported. This work focusses on supporting verification for cases where generic code that uses reflection is to be used in a “closed” program where the structure of the program is known in advance. This thesis first describes extensions to a verification system and semi-automated tool that was developed to reason about heap-manipulating programs which may store executable code on the heap. These extensions enable the tool to support a wider range of programs on account of the ability to provide stronger specifications. The system's underlying logic is an extension of separation logic that includes nested Hoare-triples which describe behaviour of stored code. Using this verification tool, with the crucial enhancements in this work, a specified reflective library has been created. The resulting work presents an approach where metadata is stored on the heap such that the reflective library can be implemented using primitive commands and then specified and verified, rather than developing new proof rules for the reflective operations. The supported reflective functions characterise a subset of Java's reflection library and the specifications guarantee both memory safety and a degree of functional correctness. To demonstrate the application of the developed solution two case studies are carried out, each of which focuses on different reflection features. The contribution to knowledge is a first look at how to support semi-automated static verification of reflective programs with meaningful specifications.
27

Wong, Leon Chih Wen. "Automated reasoning about classical mechanics." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35408.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 105-107).
by Leon Chih Wen Wong.
M.S.
28

Liang, Tianyi. "Automated reasoning over string constraints." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1478.

Abstract:
An increasing number of applications in verification and security rely on, or could benefit from, automatic solvers that can check the satisfiability of constraints over a rich set of data types that includes character strings. Unfortunately, most string solvers today are standalone tools that can reason only about some fragment of the theory of strings and regular expressions, sometimes with strong restrictions on the expressiveness of their input language (such as length bounds on all string variables). These specialized solvers reduce string problems to satisfiability problems over specific data types, such as bit vectors, or to automata decision problems. On the other hand, despite their power and success as back-end reasoning engines, general-purpose Satisfiability Modulo Theories (SMT) solvers have so far provided minimal or no native support for string reasoning. This thesis presents a deductive calculus describing a new algebraic approach that allows solving constraints over the theory of unbounded strings and regular expressions natively, without reduction to other problems. We provide proofs of refutation soundness and solution soundness of our calculus, and of solution completeness under a fair proof strategy. Moreover, we show that our calculus is a decision procedure for the theory of regular language membership with length constraints. We have implemented our calculus as a string solver for the theory of (unbounded) strings with concatenation, length, and membership in regular languages, and incorporated it into the SMT solver CVC4 to expand its already large set of built-in theories. This work makes CVC4 the first SMT solver able to accept and process a rich set of mixed constraints over strings, integers, reals, arrays, and other data types. In addition, our initial experimental results show that, on string problems, CVC4 is highly competitive with specialized string solvers with a comparable input language. We believe that the approach described in this thesis provides a new idea for string-based formal methods.
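The flavour of mixed string/integer constraints such a solver accepts can be reproduced with any SMT solver exposing the strings theory through an API. Below is a sketch using Z3's Python bindings rather than CVC4 itself, an assumption made purely for illustration (it needs the z3-solver package).

```python
from z3 import Concat, Length, Solver, String, StringVal, sat

x, y = String("x"), String("y")
s = Solver()
s.add(Concat(x, y) == StringVal("hello"))  # string concatenation...
s.add(Length(x) > Length(y))               # ...mixed with integer arithmetic
if s.check() == sat:
    m = s.model()
    print(m[x], m[y])  # e.g. "hell" "o"
```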
APA, Harvard, Vancouver, ISO, and other styles
30

Gorín, Daniel Alejandro. "Automated reasoning techniques for hybrid logics." Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10131/document.

Full text
Abstract:
Hybrid logics augment classical modal logics with machinery for describing and reasoning about identity, which is crucial in many settings. Although modal logics we would today call "hybrid" can be traced back to the work of Prior in the 1960s, their systematic study only began in the late 1990s. Part of their interest comes from the fact that they fill an important expressivity gap in modal logics; indeed, they are sometimes referred to as "modal logics with equality". One of the unifying themes of this thesis is the satisfiability problem for the arguably best-known hybrid logic, H(@,dwn), and some of its sublogics. Satisfiability is the basic problem in automated reasoning. In the case of hybrid logics it has been studied mainly using the tableaux method. In this thesis we attempt to complete the picture by investigating satisfiability for hybrid logics using first-order resolution (via translations) and variations of a resolution calculus that operates directly on hybrid formulas. We first present several satisfiability-preserving, linear-time translations from H(@,dwn) to first-order logic, designed so that they tend to reduce the search space of a resolution-based theorem prover for first-order logic. We then turn our attention to resolution-based calculi that work directly on hybrid formulas, in particular the so-called direct resolution calculus. Inspired by first-order resolution, we turn this calculus into a calculus of ordered resolution with selection functions and prove that it possesses the reduction property for counterexamples, from which its refutational completeness follows, together with its compatibility with the well-known standard redundancy criterion. We also show that a certain refinement of this calculus constitutes a decision procedure for H(@), a decidable fragment of H(@,dwn). In the last part of this thesis we investigate certain normal forms for hybrid logics and other extended modal logics. We are interested in normal forms where certain modalities are guaranteed not to occur under the scope of other modal operators. We show that these kinds of transformations can be exploited in a pre-processing step to reduce the number of inferences required by a modal prover. In an attempt to formulate these results in a way that also encompasses other extended modal logics, we arrive at a formulation of modal semantics in terms of a novel type of coinductively defined models. Many extended modal logics (such as hybrid logics) can be defined in terms of classes of coinductive models. In this way, results that previously had to be proved separately for each language (but whose proofs were known to be mere routine) can now be proved in a general way.
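The translations build on the standard translation of modal logic into first-order logic. Here is a minimal sketch for a small hybrid fragment (propositional symbols, nominals, the diamond, and @); the formula encoding is invented, and the thesis's search-space-reducing variants refine this basic scheme.

```python
# The standard translation ST_x from a hybrid modal fragment into first-order
# logic, producing FOL formulas as strings. Formulas are tuples such as
# ('dia', phi) or ('at', 'i', phi); this is the textbook translation, not the
# optimised variants developed in the thesis.
import itertools

fresh = itertools.count()

def st(x, phi):
    """Translate hybrid formula phi relative to the world term x."""
    op = phi[0]
    if op == 'prop':                 # propositional symbol p becomes P_p(x)
        return f"P_{phi[1]}({x})"
    if op == 'nom':                  # a nominal names exactly one world
        return f"{x} = {phi[1]}"
    if op == 'not':
        return f"~({st(x, phi[1])})"
    if op == 'and':
        return f"({st(x, phi[1])} & {st(x, phi[2])})"
    if op == 'dia':                  # <>phi: some accessible world satisfies phi
        y = f"y{next(fresh)}"
        return f"exists {y}. (R({x},{y}) & {st(y, phi[1])})"
    if op == 'at':                   # @_i phi: phi holds at the world named i
        return st(phi[1], phi[2])
    raise ValueError(f"unknown operator: {op}")

# @_i <>(p & i): from world i, some successor satisfies p and is i itself
print(st('x', ('at', 'i', ('dia', ('and', ('prop', 'p'), ('nom', 'i'))))))
# -> exists y0. (R(i,y0) & (P_p(y0) & y0 = i))
```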
APA, Harvard, Vancouver, ISO, and other styles
31

Hoder, Krystof. "Practical aspects of automated first-order reasoning." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/practical-aspects-of-automated-firstorder-reasoning(1331ec1f-802c-4aeb-9265-1248d8db2a8e).html.

Full text
Abstract:
Our work focuses on bringing first-order reasoning closer to practical applications, particularly in software and hardware verification. The aim is to develop techniques that make first-order reasoners more scalable for large problems and better suited to these applications. In pursuit of this goal the work proceeds in three main directions. First, we develop an algorithm for efficient pre-selection of axioms. This algorithm is already widely used by the community and enables off-the-shelf theorem provers to work with problems having millions of axioms that would otherwise overwhelm them. Secondly, we focus on the saturation algorithm itself and develop a new calculus for the separate handling of propositional predicates. We also carry out extensive research on various ways of clause splitting within the saturation algorithm. The third main block of our work is focused on the use of saturation-based first-order theorem provers for software verification, particularly for generating invariants and computing interpolants. We base our work on theoretical results of Kovacs and Voronkov published in 2009 at the CADE and FASE conferences. We develop a practical implementation which embraces all the extensions of the basic resolution and superposition calculus contained in the theorem prover Vampire. We have also developed a unique proof-transforming algorithm which optimizes the computed interpolants with respect to a user-specified cost function.
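The pre-selection algorithm referred to is SInE. A rough Python sketch in its spirit: a symbol "triggers" an axiom if it is among the rarest symbols occurring in that axiom, and selection walks trigger links outward from the goal's symbols. The axiom representation and tolerance value are invented for illustration.

```python
# SInE-style axiom selection (a sketch, not the production implementation).
# Axioms are (name, set-of-symbols) pairs.

def occurrences(axioms):
    occ = {}
    for _, syms in axioms:
        for s in syms:
            occ[s] = occ.get(s, 0) + 1
    return occ

def select(axioms, goal_symbols, tolerance=1.5):
    occ = occurrences(axioms)
    reached = set(goal_symbols)
    selected = {}
    changed = True
    while changed:
        changed = False
        for name, syms in axioms:
            if name in selected or not syms:
                continue
            rarest = min(occ[s] for s in syms)
            # triggered by a reached symbol that is (near-)rarest in the axiom
            if any(s in reached and occ[s] <= tolerance * rarest for s in syms):
                selected[name] = syms
                reached |= syms          # newly reached symbols trigger more
                changed = True
    return list(selected)

axioms = [("a1", {"human", "mortal"}),
          ("a2", {"mortal"}),
          ("a3", {"planet", "orbits"})]      # unrelated to the goal
print(select(axioms, {"human"}))             # -> ['a1', 'a2']
```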
APA, Harvard, Vancouver, ISO, and other styles
32

Alvarez, Divo Carlos Eduardo. "Automated Reasoning on Feature Models via Constraint Programming." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-156437.

Full text
Abstract:
Feature models are often used in software product lines to represent a set of products and to reason about their properties, similarities and differences, costs, etc. The challenge is to automate such reasoning, which translates into a positive impact on the production, cost, and creation of the final products. To approach this we take advantage of constraint programming technology, which has proven highly effective on problems of considerable complexity. Throughout the thesis we state the reasons for choosing this tool, evaluate its advantages and drawbacks, and show results that support the suitability of constraint programming. Keywords: feature models, software product lines, constraint programming.
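The encoding idea can be sketched on a toy feature model. Here the Z3 Python bindings stand in for the constraint-programming system used in the thesis, and the feature model itself is invented.

```python
# A toy feature model encoded as boolean constraints; automated reasoning
# here means enumerating every valid product. Z3 is a stand-in solver.
from z3 import Bools, Implies, Or, And, Not, Solver, sat, is_true

car, engine, gas, electric, gps = Bools('car engine gas electric gps')

s = Solver()
s.add(car)                                  # the root feature is always present
s.add(engine == car)                        # mandatory child
s.add(Implies(gps, car))                    # optional child
s.add(Implies(engine, Or(gas, electric)))   # alternative group under engine
s.add(Implies(gas, engine), Implies(electric, engine))
s.add(Not(And(gas, electric)))              # alternatives are mutually exclusive

features = [car, engine, gas, electric, gps]
while s.check() == sat:                     # enumerate all valid products
    m = s.model()
    vals = {f: m.eval(f, model_completion=True) for f in features}
    print([str(f) for f in features if is_true(vals[f])])
    s.add(Or([f != vals[f] for f in features]))   # block this product
```

On this model the loop prints four products: gas or electric, each with or without gps.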
APA, Harvard, Vancouver, ISO, and other styles
33

Sharpe, David. "The AllPaths automated reasoning procedure with clause trees." Thesis, University of New Brunswick, 1996. http://hdl.handle.net/1882/553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Pelzer, Björn [Verfasser]. "Automated Reasoning Embedded in Question Answering / Björn Pelzer." Koblenz : Universitätsbibliothek Koblenz, 2013. http://d-nb.info/1034623281/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Sharpe, David. "The AllPaths automated reasoning procedure with clause trees." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq23835.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Buffett, Scott. "Investigating iterative deepening in top-down automated reasoning." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ38363.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Smyth, Ben. "Formal verification of cryptographic protocols with automated reasoning." Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/1604/.

Full text
Abstract:
Cryptographic protocols form the backbone of our digital society. Unfortunately, the security of numerous critical components has been neglected. As a consequence, attacks have resulted in financial loss, violations of personal privacy, and threats to democracy. This thesis aids the secure design of cryptographic protocols and facilitates the evaluation of existing schemes. Developing a secure cryptographic protocol is game-like in nature, and a good designer will consider attacks against key components. Unlike games, however, an adversary is not governed by the rules and may deviate from expected behaviours. Secure cryptographic protocols are therefore notoriously difficult to define. Accordingly, cryptographic protocols must be scrutinised by experts using procedures that can evaluate security properties. This thesis advances verification techniques for cryptographic protocols using formal methods, with an emphasis on automation. The key contributions are threefold. Firstly, a definition of election verifiability for electronic voting protocols is presented; secondly, a definition of user-controlled anonymity for Direct Anonymous Attestation is delivered; and, finally, a procedure to automatically evaluate observational equivalence is introduced. This work enables security properties of cryptographic protocols to be studied. In particular, we evaluate security in electronic voting protocols and Direct Anonymous Attestation schemes, discovering, and fixing, a vulnerability in the RSA-based Direct Anonymous Attestation protocol. Ultimately, this thesis will help avoid the current situation whereby numerous cryptographic protocols are deployed and later found to be insecure.
APA, Harvard, Vancouver, ISO, and other styles
38

Castellini, Claudio. "Automated reasoning in quantified modal and temporal logics." Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/753.

Full text
Abstract:
This thesis is about automated reasoning in quantified modal and temporal logics, with an application to formal methods. Quantified modal and temporal logics are extensions of classical first-order logic in which the notion of truth is extended to take into account its necessity or, equivalently, in the temporal setting, its persistence through time. Due to their high complexity, these logics are less widely known and studied than their propositional counterparts. Moreover, little is known so far about their mechanisability and usefulness for formal methods. The main contributions of this thesis are threefold: firstly, we devise a sound and complete set of sequent calculi for quantified modal logics; secondly, we extend the approach to the quantified temporal logic of linear, discrete time and develop a framework for doing automated reasoning via Proof Planning in it; thirdly, we show a set of experimental results obtained by applying the framework to the problem of Feature Interactions in telecommunication systems. These results indicate that (a) the problem can be concisely and effectively modeled in the aforementioned logic, (b) proof planning actually captures common structures in the related proofs, and (c) the approach is also viable from the point of view of efficiency.
APA, Harvard, Vancouver, ISO, and other styles
39

Bennett, Brandon. "Logical representations for automated reasoning about spatial relationships." Thesis, University of Leeds, 1997. http://etheses.whiterose.ac.uk/1271/.

Full text
Abstract:
This thesis investigates logical representations for describing and reasoning about spatial situations. Previously proposed theories of spatial regions are investigated in some detail, especially the 1st-order theory of Randell, Cui and Cohn (1992), and the difficulty of achieving effective automated reasoning with these systems is observed. A new approach is presented, based on encoding spatial relations in formulae of 0-order ('propositional') logics. It is proved that entailment, which is valid according to the standard semantics for these logics, is also valid with respect to the spatial interpretation. Consequently, well-known mechanisms for propositional reasoning can be applied to spatial reasoning. Specific encodings of topological relations into both the modal logic S4 and the intuitionistic propositional calculus are given. The complexity of reasoning using the intuitionistic representation is examined, and a procedure is presented which is shown to be of O(n³) complexity in the number of relations involved. To make this kind of representation sufficiently expressive, the concepts of model constraint and entailment constraint are introduced. By means of this distinction a 0-order formula may be used either to assert or to deny that a certain spatial constraint holds of some situation. It is shown how the proof theory of a 0-order logical language can be extended by a simple meta-level generalisation to accommodate a representation involving these two types of formula. A number of other topics are dealt with: a decision procedure based on quantifier elimination is given for a large class of formulae within a 1st-order topological language; reasoning mechanisms based on the composition of spatial relations are studied; and the non-topological property of convexity is examined, both from the point of view of its 1st-order characterisation and its incorporation into a 0-order spatial logic. It is suggested that 0-order representations could be employed in a similar manner to encode other spatial concepts.
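One of the mechanisms studied, reasoning by composing relations, follows a pattern that can be sketched generically. In this illustrative Python sketch the simple point algebra {<, =, >} stands in for a spatial composition table such as RCC8's; the path-consistency loop has the same shape for any such table.

```python
# Composition-table reasoning: repeatedly refine the possible relations
# between x and z using the composition of the x-y and y-z relations.
ALL = frozenset('<=>')
COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): set(ALL),
        ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
        ('>', '<'): set(ALL), ('>', '='): {'>'}, ('>', '>'): {'>'}}
CONV = {'<': '>', '=': '=', '>': '<'}

def compose(rs1, rs2):
    return {r for r1 in rs1 for r2 in rs2 for r in COMP[(r1, r2)]}

def converse(rs):
    return {CONV[r] for r in rs}

def path_consistent(nodes, constraints):
    """Refine a network of possible relations; False means inconsistent."""
    net = {(i, j): set(ALL) for i in nodes for j in nodes if i != j}
    for (i, j), rs in constraints.items():
        net[(i, j)] &= rs
        net[(j, i)] &= converse(rs)
    changed = True
    while changed:
        changed = False
        for i in nodes:
            for k in nodes:
                for j in nodes:
                    if len({i, j, k}) < 3:
                        continue
                    refined = net[(i, k)] & compose(net[(i, j)], net[(j, k)])
                    if refined != net[(i, k)]:
                        net[(i, k)], net[(k, i)] = refined, converse(refined)
                        if not refined:
                            return False    # empty relation: contradiction
                        changed = True
    return True

# a < b and b < c entail a < c, so adding c < a is inconsistent:
print(path_consistent('abc', {('a', 'b'): {'<'},
                              ('b', 'c'): {'<'},
                              ('c', 'a'): {'<'}}))    # -> False
```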
APA, Harvard, Vancouver, ISO, and other styles
40

Merry, Alexander. "Reasoning with !-graphs." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:416c2e6d-2932-4220-8506-50e6b403b660.

Full text
Abstract:
The aim of this thesis is to present an extension to the string graphs of Dixon, Duncan and Kissinger that allows the finite representation of certain infinite families of graphs and graph rewrite rules, and to demonstrate that a logic can be built on this to allow the formalisation of inductive proofs in the string diagrams of compact closed and traced symmetric monoidal categories. String diagrams provide an intuitive method for reasoning about monoidal categories, but this does not prevent those using them from making mistakes in proofs. To this end, there is a project (Quantomatic) to build a proof assistant for string diagrams, at least for those based on categories with a notion of trace. The development of string graphs has provided a combinatorial formalisation of string diagrams, laying the foundations for this project. The prevalence of commutative Frobenius algebras (CFAs) in quantum information theory, a major application area of these diagrams, has led to the use of variable-arity nodes as a shorthand for normalised networks of Frobenius algebra morphisms, so-called "spider notation". This notation greatly eases reasoning with CFAs, but string graphs are inadequate to properly encode this reasoning. This dissertation first extends string graphs to allow variable-arity nodes to be represented at all, and then introduces !-box notation, and structures to encode it, to represent string graph equations containing repeated subgraphs, where the number of repetitions is arbitrary. This can be used to represent, for example, the "spider law" of CFAs, allowing two spiders to be merged, as well as the much more complex generalised bialgebra law that can arise from two interacting CFAs. The work then demonstrates how we can reason directly about !-graphs, viewed as (typically infinite) families of string graphs. Of particular note is the presentation of a form of graph-based induction, allowing the formal encoding of proofs that previously could only be represented as a mix of string diagrams and explanatory text.
APA, Harvard, Vancouver, ISO, and other styles
41

Pendergraft, James O. "Planning with hypothetical reasoning." Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/44687.

Full text
Abstract:

A planner driven by a causal theory and based on hypothetical reasoning is constructed and discussed. The task is approached from the fundamentals of time and event logics and causality, resulting in a planner suitable for modeling a wide variety of realistic problem domains and capable of reasoning in an intuitive manner about dynamic domains. The underlying causal theory drives the planning process directly and, in conjunction with the uniform representation of time and causal facts, allows elegant solutions to planning problems. A new type of planning problem, the indirect goal problem, is identified and solved; it is also shown that previous planners cannot solve this type of problem. The frame problem is discussed in detail and given a computational definition suitable for objective comparison between different approaches. The hypothetical reasoning approach is shown to allow an elegant solution to the frame problem appropriate for planning systems.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
42

Goble, Tiffany Danielle. "Automated Reasoning: Computer Assisted Proofs in Set Theory Using Gödel's Algorithm for Class Formation." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4767.

Full text
Abstract:
Automated reasoning, and in particular automated theorem proving, has become an important research field within mathematics. Besides being used to verify proofs of theorems, it has also been used to discover proofs of theorems which were previously open problems. In this thesis, an automated reasoning assistant based on Gödel's class theory is used to deduce several theorems.
APA, Harvard, Vancouver, ISO, and other styles
43

Klinov, Pavel. "Practical reasoning in probabilistic description logic." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/practical-reasoning-in-probabilistic-description-logic(6aff2ad0-dc76-44cf-909b-2134f580f29b).html.

Full text
Abstract:
Description Logics (DLs) form a family of languages which correspond to decidable fragments of First-Order Logic (FOL). They have been overwhelmingly successful for constructing ontologies: conceptual structures describing domain knowledge. Ontologies have proved valuable in a range of areas, most notably bioinformatics, chemistry, Health Care and Life Sciences, and the Semantic Web. One limitation of DLs, as fragments of FOL, is their restricted ability to cope with various forms of uncertainty. For example, medical knowledge often includes statistical relationships, e.g., findings or results of clinical trials. Currently such knowledge is maintained separately, e.g., in Bayesian networks or statistical models. This often hinders knowledge integration and reuse, leads to duplication and, consequently, inconsistencies. One answer to this issue is probabilistic logics, which allow for smooth integration of classical knowledge, i.e., knowledge expressible in standard FOL or its sub-languages, and uncertain knowledge. However, probabilistic logics have long been considered impractical because of discouraging computational properties, mostly due to the lack of simplifying assumptions, e.g., the independence assumptions which are central to Bayesian networks. In this thesis we demonstrate that deductive reasoning in a particular probabilistic DL, called P-SROIQ, can be computationally practical. We present a range of novel algorithms, in particular the probabilistic satisfiability procedure (PSAT) which is, to our knowledge, the first scalable PSAT algorithm for a non-propositional probabilistic logic. We perform an extensive performance and scalability evaluation on different synthetic and natural data sets to justify practicality. In addition, we study theoretical properties of P-SROIQ by formally translating it into a fragment of first-order logic of probability, which allows us to gain a better insight into certain important limitations of P-SROIQ. Finally, we investigate its applicability from the practical perspective, for instance using it to extract all inconsistencies from a real rule-based medical expert system. We believe the thesis will be of interest to developers of probabilistic reasoners. Some of the algorithms, e.g., PSAT, could also be valuable to the Operations Research community since they are heavily based on mathematical programming. Finally, the theoretical analysis could be helpful for designers of future probabilistic logics.
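At its core, propositional PSAT asks whether some probability distribution over the possible worlds satisfies the stated bounds, which is a linear feasibility problem. Here is a naive sketch that enumerates the worlds explicitly; the thesis's algorithms are precisely about avoiding this enumeration for the non-propositional P-SROIQ.

```python
# Propositional PSAT as linear programming: is there a distribution over the
# 2^n truth assignments with P(a) = 0.6 and P(a & b) = 0.5? (Invented bounds.)
from itertools import product
from scipy.optimize import linprog

worlds = list(product([False, True], repeat=2))        # assignments to (a, b)

rows = [
    [1.0 if w[0] else 0.0 for w in worlds],            # coefficient of P(a)
    [1.0 if (w[0] and w[1]) else 0.0 for w in worlds], # coefficient of P(a & b)
    [1.0] * len(worlds),                               # probabilities sum to 1
]
rhs = [0.6, 0.5, 1.0]

res = linprog(c=[0.0] * len(worlds), A_eq=rows, b_eq=rhs,
              bounds=[(0.0, 1.0)] * len(worlds))
print("satisfiable" if res.success else "unsatisfiable")   # -> satisfiable
```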
APA, Harvard, Vancouver, ISO, and other styles
44

Bell, J. "Predictive conditionals, nonmonotonicity and reasoning about the future." Thesis, University of Essex, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Jamnik, Mateja. "Automating diagrammatic proofs of arithmetic arguments." Thesis, University of Edinburgh, 1999. http://hdl.handle.net/1842/529.

Full text
Abstract:
This thesis is on the automation of diagrammatic proofs, a novel approach to mechanised mathematical reasoning. Theorems in automated theorem proving are usually proved by formal logical proofs. However, there are some conjectures which humans can prove by the use of geometric operations on diagrams that somehow represent these conjectures: so-called diagrammatic proofs. Insight is often more clearly perceived in these diagrammatic proofs than in the algebraic proofs. We are investigating and automating such diagrammatic reasoning about mathematical theorems. Concrete rather than general diagrams are used to prove ground instances of a universally quantified theorem. The diagrammatic proof is constructed by applying geometric operations to the diagram; these operations are the inference steps of the proof. A general schematic proof is extracted from the ground instances of a proof and is represented as a recursive program that consists of a general number of applications of geometric operations. When given a particular diagram, a schematic proof generates a proof for that diagram. To verify that the schematic proof produces a correct proof of the conjecture for each ground instance, we check its correctness in a theory of diagrams. We use the constructive omega-rule and schematic proofs to make the translation from concrete instances to a general argument about the diagrammatic proof. The realisation of our ideas is the diagrammatic reasoning system DIAMOND. DIAMOND allows a user to interactively construct instances of a diagrammatic proof; it then automatically abstracts these into a general schematic proof and checks the correctness of this proof using an inductive theorem prover.
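The flavour of checking ground instances of a schematic proof can be conveyed with the classic diagrammatic example: the sum of the first n odd numbers is n², proved by splitting a square of dots into nested L-shapes, each contributing an odd number of dots. A Python sketch of the instance check (not DIAMOND's actual machinery):

```python
# An n-by-n square of dots splits into n nested L-shapes ("ells"); the k-th
# ell has exactly 2k-1 dots, so 1 + 3 + ... + (2n-1) = n^2. The schematic
# proof applies the split n times; here we verify ground instances.

def ell(k):
    """Dots of the k-th L-shape: the cells (i, j) with max(i, j) = k - 1."""
    return {(k - 1, j) for j in range(k)} | {(i, k - 1) for i in range(k)}

def check_instance(n):
    square = {(i, j) for i in range(n) for j in range(n)}
    ells = [ell(k) for k in range(1, n + 1)]
    assert all(len(e) == 2 * k - 1 for k, e in enumerate(ells, start=1))
    assert set().union(*ells) == square       # the ells tile the square
    assert sum(len(e) for e in ells) == n * n

for n in range(1, 20):    # ground instances of the universal statement
    check_instance(n)
print("all instances check out")
```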
APA, Harvard, Vancouver, ISO, and other styles
46

Wong, Fai. "Case-based reasoning (CBR) supporting P.O. Box Automation System." Thesis, University of Macau, 1999. http://umaclib3.umac.mo/record=b1636998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Farooque, Mahfuza. "Automated Reasoning Techniques as Proof-search in Sequent Calculus." Palaiseau, Ecole polytechnique, 2013. http://pastel.archives-ouvertes.fr/docs/00/96/13/44/PDF/Farooque.pdf.

Full text
Abstract:
Computer-aided reasoning plays a great role in computer science and mathematical logic, from logic programming to automated deduction, via interactive proof assistants, etc. The general aim of this thesis is to design a framework in which various computer-aided reasoning techniques can be implemented, so that they can collaborate, be generalised, and be implemented in a safe and trusted way. The framework I propose is a sequent calculus called LKp(T), which generalises an older calculus from the literature to the presence of an arbitrary background theory for which we have a decision procedure, such as linear arithmetic. The thesis develops the meta-theory of LKp(T), such as its logical completeness. We then show how it specifies a proof-search procedure that can emulate a well-known technique from the field of Satisfiability Modulo Theories, namely DPLL(T). Finally, clause and connection tableaux are other widely used automated reasoning techniques, of a rather different nature from DPLL. This thesis also describes how such tableaux techniques can be expressed as bottom-up proof search in LKp(T). The simulation is given for both propositional and first-order logic, opening up new perspectives of generalisation and collaboration between tableaux techniques and DPLL, even in the presence of a background theory.
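The propositional core of DPLL, whose theory-extended form DPLL(T) the thesis emulates as proof search in LKp(T), is short enough to sketch. This is a naive implementation, without the learning and heuristics of real SAT solvers.

```python
# Plain DPLL: unit propagation plus case-splitting on a literal.
# Clauses are frozensets of non-zero integer literals (-3 means "not 3").

def unit_propagate(clauses, assignment):
    clauses = set(clauses)
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            return clauses, assignment
        lit = units[0]
        assignment = assignment | {lit}
        new = set()
        for c in clauses:
            if lit in c:
                continue                  # clause satisfied, drop it
            c = c - {-lit}                # falsified literal, shrink clause
            if not c:
                return None, None         # empty clause: conflict
            new.add(c)
        clauses = new

def dpll(clauses, assignment=frozenset()):
    clauses, assignment = unit_propagate(clauses, assignment)
    if clauses is None:
        return None                       # conflict on this branch
    if not clauses:
        return assignment                 # every clause satisfied
    lit = next(iter(next(iter(clauses)))) # split on some remaining literal
    for choice in (lit, -lit):
        result = dpll(clauses | {frozenset([choice])}, assignment)
        if result is not None:
            return result
    return None

# (p or q) & (~p or q) & (~q or r), with p=1, q=2, r=3:
cnf = {frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})}
print(dpll(cnf))   # a satisfying assignment; exact literals depend on splits
```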
APA, Harvard, Vancouver, ISO, and other styles
48

Dellis, Nelson Charles. "Using Controlled Natural Language for World Knowledge Reasoning." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_theses/48.

Full text
Abstract:
Search engines are the most popular tools for finding answers to questions, but unfortunately they do not always provide complete, direct answers; answers often need to be extracted by the user from the web pages returned by the search engine. This research addresses this problem and shows how an automated theorem prover, combined with existing ontologies and the web, is able to reason about world knowledge and return direct answers to users' questions. The use of an automated theorem prover also allows more complex questions to be asked. Automated theorem provers that exhibit these capabilities are called World Knowledge Reasoning systems. This research discusses one such system, the CNL-WKR system. The CNL-WKR system uses the ACE controlled natural language as its user-input language. It then calls upon external sources on the web, as well as internal ontological sources, during the theorem-proving process in order to find answers. The system uses the automated theorem prover SPASS-XDB. The result is a system that is capable of answering complex questions about the world.
APA, Harvard, Vancouver, ISO, and other styles
49

Chan, Michael. "Ontology evolution in physics." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/7907.

Full text
Abstract:
With the advent of reasoning problems in dynamic environments, there is an increasing need for automated reasoning systems to adapt automatically to unexpected changes in representations. In particular, the automation of the evolution of their ontologies needs to be enhanced without substantially sacrificing the expressivity of the underlying representation. Revision of beliefs is not enough, as adding to or removing from beliefs does not change the underlying formal language. General reasoning systems employed in such environments should also address situations in which the language for representing knowledge is not shared among the involved entities, e.g., the ontologies in a multi-ontology environment or the agents in a multi-agent environment. Our techniques involve the diagnosis of faults in existing, possibly heterogeneous, ontologies and the resolution of these faults by manipulating the signature and/or the axioms. This thesis describes the design, development and evaluation of GALILEO (Guided Analysis of Logical Inconsistencies Lead to Evolution of Ontologies), a system designed to detect conflicts in highly expressive ontologies and resolve them by performing appropriate repair operations. The integrated mechanism that handles ontology evolution is able to distinguish between various types of conflicts, each corresponding to a unique kind of ontological fault. We apply and develop our techniques in the domain of Physics. This is an excellent domain because many of its seminal advances can be seen as examples of ontology evolution, i.e. changes in the way that physicists perceive the world, and its case studies are well documented, unlike those of many other domains. Our research covers analysing a wide-ranging development set of case studies and evaluating the performance of the system on a test set. Because the formal representations of most of the case studies are non-trivial and the underlying logic has a high degree of expressivity, we face some tricky technical challenges, including dealing with the potentially large number of choices in diagnosis and repair. In order to enhance the practicality and manageability of the ontology evolution process, GALILEO incorporates the functionality of generating physically meaningful diagnoses and repairs, thereby narrowing the search space to a manageable size.
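The diagnose-then-repair loop can be sketched generically: isolate a minimal conflicting set of axioms, then repair. A toy Python version using Z3 unsat cores; the "axioms" are invented, and the repair here (plain retraction) is far cruder than GALILEO's fault-specific signature and axiom manipulations.

```python
# Diagnose an inconsistent theory via unsat cores, then repair by retracting
# a culprit axiom. Illustrative only.
from z3 import Real, Bool, Solver, unsat

m = Real('mass')
axioms = {
    'positive_mass':  m > 0,
    'measured_value': m == 5,
    'faulty_reading': m == 0,     # conflicts with the other two
}

def diagnose(axioms):
    s = Solver()
    for name, formula in axioms.items():
        s.assert_and_track(formula, Bool(name))   # label each axiom
    if s.check() == unsat:
        return [str(b) for b in s.unsat_core()]   # a conflicting subset
    return []

conflict = diagnose(axioms)
print("conflict:", conflict)      # e.g. ['measured_value', 'faulty_reading']
while conflict:                   # naive repair: retract one core axiom
    del axioms[conflict[-1]]
    conflict = diagnose(axioms)
print("repaired theory:", list(axioms))
```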
APA, Harvard, Vancouver, ISO, and other styles
50

Morettin, Paolo. "Learning and Reasoning in Hybrid Structured Spaces." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/264203.

Full text
Abstract:
Many real-world AI applications involve reasoning over both continuous and discrete variables, while requiring some level of symbolic reasoning that can provide guarantees on the system's behaviour. Unfortunately, most existing probabilistic models do not efficiently support hard constraints, or they are limited to purely discrete or purely continuous scenarios. Weighted Model Integration (WMI) is a recent and general formalism that enables probabilistic modeling and inference in hybrid structured domains. In contrast with most alternatives, WMI-based inference algorithms compute probabilities inside a structured support involving both logical and algebraic relationships between variables. While some progress has been made in recent years and the topic is increasingly gaining interest from the community, research in this area is at an early stage. These aspects motivate the study of hybrid and symbolic probabilistic models and the development of scalable inference procedures and effective learning algorithms in these domains. This PhD thesis embodies my effort in studying scalable reasoning and learning techniques in the context of WMI.
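Weighted Model Integration generalises weighted model counting: sum over the truth assignments of the boolean variables and, for each assignment, integrate the weight over the real region that the support leaves feasible. A one-variable sketch with an invented support and weight:

```python
# WMI on a toy hybrid support with one boolean b and one real x.
# Support: 0 <= x <= 2 and (b <-> x <= 1). Weight: x^2.
from sympy import symbols, integrate

x = symbols('x')
weight = x**2

def region(b):
    """Interval of x consistent with the support for a given value of b."""
    return (0, 1) if b else (1, 2)

wmi = sum(integrate(weight, (x, lo, hi))
          for b in (True, False) for (lo, hi) in [region(b)])
print(wmi)                                    # 1/3 + 7/3 = 8/3

# Probabilities come from restricting the support:
print(integrate(weight, (x, 0, 1)) / wmi)     # P(b) = (1/3)/(8/3) = 1/8
```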
APA, Harvard, Vancouver, ISO, and other styles