To see the other types of publications on this topic, follow the link: Computational reasoning.

Dissertations / Theses on the topic 'Computational reasoning'

Consult the top 50 dissertations / theses for your research on the topic 'Computational reasoning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Zanuttini, Bruno. "Computational Aspects of Learning, Reasoning, and Deciding." Habilitation à diriger des recherches, Université de Caen, 2011. http://tel.archives-ouvertes.fr/tel-00995250.

Full text
Abstract:
We present results and research projects about the computational aspects of classical problems in Artificial Intelligence. We are interested in the setting of agents able to describe their environment through a possibly huge number of Boolean descriptors, and to act upon this environment. Typical applications of such studies are the design of autonomous robots (for exploring unknown zones, for instance) or of software assistants (for scheduling, for instance). The ultimate goal of research in this domain is the design of agents able to learn autonomously by interacting with their environment (including human users), able to reason in order to produce new pieces of knowledge and to explain observed phenomena, and finally, able to decide which action to take at any moment, in a rational fashion. Ideally, such agents will be fast and efficient as soon as they start to interact with their environment, will improve their behavior as time goes by, and will be able to communicate naturally with humans. Among the numerous research questions raised by these objectives, we are especially interested in concept and preference learning, in reinforcement learning, in planning, and in some underlying problems in complexity theory. Particular attention is paid to interaction with humans and to the huge numbers of environment descriptors that real-world applications require.
APA, Harvard, Vancouver, ISO, and other styles
2

Pease, Alison. "A computational model of Lakatos-style reasoning." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/2113.

Full text
Abstract:
Lakatos outlined a theory of mathematical discovery and justification, which suggests ways in which concepts, conjectures and proofs gradually evolve via interaction between mathematicians. Different mathematicians may have different interpretations of a conjecture, examples or counterexamples of it, and beliefs regarding its value or theoremhood. Through discussion, concepts are refined and conjectures and proofs modified. We hypothesise that: (i) it is possible to computationally represent Lakatos's theory, and (ii) it is useful to do so. In order to test our hypotheses we have developed a computational model of his theory. Our model is a multiagent dialogue system. Each agent has a copy of a pre-existing theory formation system, which can form concepts and make conjectures which empirically hold for the objects of interest supplied. Distributing the objects of interest between agents means that they form different theories, which they communicate to each other. Agents then find counterexamples and use methods identified by Lakatos to suggest modifications to conjectures, concept definitions and proofs. Our main aim is to provide a computational reading of Lakatos's theory, by interpreting it as a series of algorithms and implementing these algorithms as a computer program. This is the first systematic automated realisation of Lakatos's theory. We contribute to the computational philosophy of science by interpreting, clarifying and extending his theory. We also contribute by evaluating his theory, using our model to test hypotheses about it, and evaluating our extended computational theory on the basis of criteria proposed by several theorists. A further contribution is to automated theory formation and automated theorem proving. The process of refining conjectures, proofs and concept definitions requires a flexibility which is inherently useful in fields which handle ill-specified problems, such as theory formation. 
Similarly, the ability to automatically modify an open conjecture into one which can be proved, is a valuable contribution to automated theorem proving.
APA, Harvard, Vancouver, ISO, and other styles
3

Sanchez, Roberto. "Improving Computational Efficiency in Context-Based Reasoning Simulations." Honors in the Major Thesis, University of Central Florida, 2003. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/416.

Full text
Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf
Bachelors
Engineering and Computer Science
Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
4

Broxvall, Mathias. "A Study in the Computational Complexity of Temporal Reasoning." Doctoral thesis, Linköping : Univ, 2002. http://www.ep.liu.se/diss/science_technology/07/79/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Griffith, Todd W. "A computational theory of generative modeling in scientific reasoning." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/8177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wong, Yiu Kwong. "Application of computational models and qualitative reasoning to economics." Thesis, Heriot-Watt University, 1996. http://hdl.handle.net/10399/688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schoter, Andreas. "The computational application of bilattice logic to natural reasoning." Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/434.

Full text
Abstract:
Chapter 1 looks at natural reasoning. It begins by considering the inferences that people make, particularly in terms of how those inferences differ from what is sanctioned by classical logic. I then consider the role of logic in relation to psychology and compare this relationship with the competence/performance distinction from syntax. I discuss four properties of natural reasoning that I believe are key to any theory: specifically partiality, paraconsistency, relevance and defeasibility. I then discuss whether these are semantic properties or pragmatic ones, and conclude by describing a new view of logic and inference prevalent in some contemporary writings. Chapter 2 looks at some of the existing formal approaches to the four properties. For each property I present the basic idea in formal terms, and then discuss a number of systems from the literature. Each section concludes with a brief discussion of the importance of the given property in the field of computation. Chapter 3 develops the formal system used in this thesis: an evidential, bilattice-based logic (EBL). I begin by presenting the mathematical preliminaries, and then show how the four properties of natural reasoning can be captured. The details of the logic itself are presented, beginning with the syntax and then moving on to the semantics. The role of pragmatic inferences in the logic is considered and a formal solution is advanced. I conclude by comparing EBL to some of the logics discussed in Chapter 2. Chapter 4 rounds off Part 1 by considering the implementation of the logic and some of its computational properties. It begins by considering the application of evidential bilattice logic to logic programming; it extends Fitting's work in this area to construct a programming language, QLOG2. I give some examples of this language in use.
The QLOG2 language is then used as a part of the implementation of the EBL system itself: I describe the details of this implementation and then give some examples of the system in use. The chapter concludes by giving an informal presentation of some basic complexity results for logical closure in EBL, based on the given implementation. Chapter 5 presents some interesting data from linguistics that reflects some of the principles of natural reasoning; in particular I concentrate on implicatures and presupposition. I begin by describing the data and then consider a number of approaches from both the logical and the linguistic literature. Chapter 6 uses the logic developed in Chapter 3 to analyse the data presented in Chapter 5. I consider the basic inference cases, and then move on to more complex examples involving contextual interactions. The results are quite successful, and add weight to Mercer's quest for a common logical semantics for entailment and presupposition. All of the examples considered in this chapter can be handled by the implemented system described in Chapter 4. Finally, Chapter 7 rounds off by presenting some further areas of research that have been raised by this investigation. In particular, the issues of quantification and modality are discussed.
APA, Harvard, Vancouver, ISO, and other styles
8

Hatzilygeroudis, Ioannis. "Integrating logic and objects for knowledge representation and reasoning." Thesis, University of Nottingham, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Aronoff, Caroline Bradley. "A computational characterization of domain-based causal reasoning development in children." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119744.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 71-72).
To better understand human intelligence, we must first understand how humans use and learn from stories. One important aspect of how humans learn from stories is our ability to reason about cause and effect. Psychological evidence suggests that when children develop the ability to learn cause-and-effect relationships from stories, they do so in discrete stages, where each new stage enables the child to incorporate new kinds of information. In this thesis, I attempt to shed light on the mechanisms that underlie the development of causal reasoning in children. I create a behavior-level model, an explanatory theory, and an explanation-level model that account for the developmental stages. I implement these models on top of the Genesis Story Understanding System. The result is a psychologically plausible explanation-level model that captures the observed causal reasoning behaviors of children at different stages of development. The model also takes the observations from psychological evidence to another level by proposing mechanisms that enable such development in children.
by Caroline Bradley Aronoff.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
10

Fischer, Olivier. "Cognitively plausible heuristics to tackle the computational complexity of abductive reasoning /." The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487694389394369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Vyshemirsky, Vladislav. "Probabilistic reasoning and inference for systems biology." Thesis, Connect to e-thesis. Move to record for print version, 2007. http://theses.gla.ac.uk/47/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2007.
Ph.D. thesis submitted to the Information and Mathematical Sciences Faculty, Department of Computing Science, University of Glasgow, 2007. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
12

Roberts, David L. "Computational techniques for reasoning about and shaping player experiences in interactive narratives." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33910.

Full text
Abstract:
Interactive narratives are marked by two characteristics: 1) a space of player interactions, some subset of which are specified as aesthetic goals for the system; and 2) the affordance for players to express self-agency and have meaningful interactions. As a result, players are (often unknowing) participants in the creation of the experience. They cannot be assumed to be cooperative, nor adversarial. Thus, we must provide paradigms to designers that enable them to work with players to co-create experiences without transferring the system's goals (specified by authors) to players and without systems having a model of players' behaviors. This dissertation formalizes compact representations and efficient algorithms that enable computer systems to represent, reason about, and shape player experiences in interactive narratives. Early work on interactive narratives relied heavily on "script-and-trigger" systems, requiring sizable engineering efforts from designers to provide concrete instructions for when and how systems can modify an environment to provide a narrative experience for players. While there have been advances in techniques for representing and reasoning about narratives at an abstract level that automate the trigger side of script-and-trigger systems, few techniques have reduced the need for scripting system adaptations or reconfigurations---one of the contributions of this dissertation. We first describe a decomposition of the design process for interactive narrative into three technical problems: goal selection, action/plan selection/generation, and action/plan refinement. This decomposition allows techniques to be developed for reasoning about the complete implementation of an interactive narrative. 
We then describe representational and algorithmic solutions to these problems: a Markov Decision Process-based formalism for goal selection, a schema-based planning architecture using theories of influence from social psychology for action/plan selection/generation, and a natural language-based template system for action/plan refinement. To evaluate these techniques, we conduct simulation experiments and human subjects experiments in an interactive story. Using these techniques realizes the following three goals: 1) providing efficient algorithmic support for authoring interactive narratives; 2) designing a paradigm for AI systems to reason and act to shape player experiences based on author-specified aesthetic goals; and 3) accomplishing (1) and (2) with players feeling more engaged and without perceiving a decrease in self-agency.
APA, Harvard, Vancouver, ISO, and other styles
13

Multmeier, Jan [Verfasser]. "Representations facilitate Bayesian reasoning : computational facilitation and ecological design revisited / Jan Multmeier." Berlin : Freie Universität Berlin, 2012. http://d-nb.info/1029955263/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Atkinson, Katie Marie. "What should we do? : computational representation of persuasive argument in practical reasoning." Thesis, University of Liverpool, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Costa Leite, Manuel da. "Hypothetical reasoning in scientific discovery contexts : a preliminary cognitive science-motivated analysis." Thesis, University of Sussex, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Williams, Clive Richard. "ATLAS : a natural language understanding system." Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Harris, M. R. "Computational modelling of transitive inference : a microanalysis of a simple form of reasoning." Thesis, University of Edinburgh, 1988. http://hdl.handle.net/1842/18943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Fernández, Gil Oliver. "Adding Threshold Concepts to the Description Logic EL." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-204523.

Full text
Abstract:
We introduce a family of logics extending the lightweight Description Logic EL, that allows us to define concepts in an approximate way. The main idea is to use a graded membership function m, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C~t for ~ in {<,<=,>,>=} then collect all the individuals that belong to C with degree ~t. We further study this framework in two particular directions. First, we define a specific graded membership function deg and investigate the complexity of reasoning in the resulting Description Logic tEL(deg) w.r.t. both the empty terminology and acyclic TBoxes. Second, we show how to turn concept similarity measures into membership degree functions. It turns out that under certain conditions such functions are well-defined, and therefore induce a wide range of threshold logics. Last, we present preliminary results on the computational complexity landscape of reasoning in such a big family of threshold logics.
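The threshold construction described in this abstract can be written out compactly. The interpretation-style notation below is an illustrative sketch based only on the abstract's own description, not taken from the thesis itself:

```latex
% Graded membership: m assigns each individual d and concept C
% a degree m(d, C) in the interval [0,1].
% A threshold concept C~t collects the individuals whose degree
% compares to the threshold t as ~ prescribes:
\[
  (C_{\sim t})^{\mathcal{I}} = \{\, d \in \Delta^{\mathcal{I}} \mid m(d, C) \sim t \,\},
  \qquad \sim \,\in \{<, \leq, >, \geq\},\quad t \in [0,1].
\]
```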
APA, Harvard, Vancouver, ISO, and other styles
19

Ciatto, Giovanni <1992>. "On the role of Computational Logic in Data Science: representing, learning, reasoning, and explaining knowledge." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10192/1/phd-thesis-1.1.0%2B2022-04-17-10-08.pdf.

Full text
Abstract:
In this thesis we discuss in what ways computational logic (CL) and data science (DS) can jointly contribute to the management of knowledge within the scope of modern and future artificial intelligence (AI), and how technically-sound software technologies can be realised along the path. An agent-oriented mindset permeates the whole discussion, stressing the pivotal role of autonomous agents in exploiting both means to reach higher degrees of intelligence. Accordingly, the goals of this thesis are manifold. First, we elicit the analogies and differences among CL and DS, hence looking for possible synergies and complementarities along 4 major knowledge-related dimensions, namely representation, acquisition (a.k.a. learning), inference (a.k.a. reasoning), and explanation. In this regard, we propose a conceptual framework through which bridges between these disciplines can be described and designed. We then survey the current state of the art of AI technologies w.r.t. their capability to support bridging CL and DS in practice. After detecting lacks and opportunities, we propose the notion of logic ecosystem as the new conceptual, architectural, and technological solution supporting the incremental integration of symbolic and sub-symbolic AI. Finally, we discuss how our notion of logic ecosystem can be reified into actual software technology and extended towards many DS-related directions.
APA, Harvard, Vancouver, ISO, and other styles
20

Meikle, Laura Isabel. "Intuition in formal proof : a novel framework for combining mathematical tools." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9663.

Full text
Abstract:
This doctoral thesis addresses one major difficulty in formal proof: removing obstructions to intuition which hamper the proof endeavour. We investigate this in the context of formally verifying geometric algorithms using the theorem prover Isabelle, by first proving Graham’s Scan algorithm for finding convex hulls, then using the challenges we encountered as motivations for the design of a general, modular framework for combining mathematical tools. We introduce our integration framework — the Prover’s Palette, describing in detail the guiding principles from software engineering and the key differentiator of our approach — emphasising the role of the user. Two integrations are described, using the framework to extend Eclipse Proof General so that the computer algebra systems QEPCAD and Maple are directly available in an Isabelle proof context, capable of running either fully automated or with user customisation. The versatility of the approach is illustrated by showing a variety of ways that these tools can be used to streamline the theorem proving process, enriching the user’s intuition rather than disrupting it. The usefulness of our approach is then demonstrated through the formal verification of an algorithm for computing Delaunay triangulations in the Prover’s Palette.
APA, Harvard, Vancouver, ISO, and other styles
21

Berreby, Fiona. "Models of Ethical Reasoning." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS137.

Full text
Abstract:
Cette thèse s’inscrit dans le cadre du projet ANR eThicAa, dont les ambitions ont été : de définir ce que sont des agents autonomes éthiques, de produire des représentations formelles des conflits éthiques et de leurs objets (au sein d’un seul agent autonome, entre un agent autonome et le système auquel il appartient, entre un agent autonome et un humain, entre plusieurs agents autonomes) et d’élaborer des algorithmes d’explication pour les utilisateurs humains. L’objet de la thèse plus particulièrement a été d’étudier la modélisation de conflits éthiques au sein d’un seul agent, ainsi que la production d’algorithmes explicatifs. Ainsi, le travail présenté ici décrit l’utilisation de langages de haut niveau dans la conception d’agents autonomes éthiques. Il propose un cadre logique nouveau et modulaire pour représenter et raisonner sur une variété de théories éthiques, sur la base d’une version modifiée du calcul des événements, implémentée en Answer Set Programming. Le processus de prise de décision éthique est conçu comme une procédure en plusieurs étapes, capturée par quatre types de modèles interdépendants qui permettent à l’agent d’évaluer son environnement, de raisonner sur sa responsabilité et de faire des choix éthiquement informés. En particulier, un modèle d’action permet à l’agent de représenter des scénarios et les changements qui s’y déroulent, un modèle causal piste les conséquences des décisions prises dans les scénarios, rendant possible un raisonnement sur la responsabilité et l’imputabilité des agents, un modèle du Bien donne une appréciation de la valeur éthique intrinsèque de finalités ou d’évènements, un modèle du Juste détermine les décisions acceptables selon des circonstances données. Le modèle causal joue ici un rôle central, car il permet d’identifier des propriétés que supposent les relations causales et qui déterminent comment et dans quelle mesure il est possible d’en inférer des attributions de responsabilité. 
Notre ambition est double. Tout d’abord, elle est de permettre la représentation systématique d’un nombre illimité de processus de raisonnements éthiques, à travers un cadre adaptable et extensible en vertu de sa hiérarchisation et de sa syntaxe standardisée. Deuxièmement, elle est d’éviter l’écueil de certains travaux d’éthique computationnelle qui intègrent directement l’information morale dans l’engin de raisonnement général sans l’expliciter – alimentant ainsi les agents avec des réponses atomiques qui ne représentent pas la dynamique sous-jacente. Nous visons à déplacer de manière globale le fardeau du raisonnement moral du programmeur vers le programme lui-même.
This thesis is part of the ANR eThicAa project, which has aimed to define moral autonomous agents, provide a formal representation of ethical conflicts and of their objects (within one artificial moral agent, between an artificial moral agent and the rules of the system it belongs to, between an artificial moral agent and a human operator, between several artificial moral agents), and design explanation algorithms for the human user. The particular focus of the thesis pertains to exploring ethical conflicts within a single agent, as well as designing explanation algorithms. The work presented here investigates the use of high-level action languages for designing such ethically constrained autonomous agents. It proposes a novel and modular logic-based framework for representing and reasoning over a variety of ethical theories, based on a modified version of the event calculus and implemented in Answer Set Programming. The ethical decision-making process is conceived of as a multi-step procedure captured by four types of interdependent models which allow the agent to represent situations, reason over accountability and make ethically informed choices. More precisely, an action model enables the agent to appraise its environment and the changes that take place in it, a causal model tracks agent responsibility, a model of the Good makes a claim about the intrinsic value of goals or events, and a model of the Right considers what an agent should do, or is most justified in doing, given the circumstances of its actions. The causal model plays a central role here, because it permits identifying some properties that causal relations assume and that determine how, as well as to what extent, we may ascribe ethical responsibility on their basis. The overarching ambition of the presented research is twofold.
First, to allow the systematic representation of an unbounded number of ethical reasoning processes, through a framework that is adaptable and extensible by virtue of its designed hierarchisation and standard syntax. Second, to avoid the pitfall of some works in current computational ethics that too readily embed moral information within computational engines, thereby feeding agents with atomic answers that fail to truly represent underlying dynamics. We aim instead to comprehensively displace the burden of moral reasoning from the programmer to the program itself.
APA, Harvard, Vancouver, ISO, and other styles
22

Lee, Yoonhyoung Gordon Peter C. "Linguistic complexity and working memory structure effect of the computational demands of reasoning on syntactic complexity /." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2007. http://dc.lib.unc.edu/u?/etd,797.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2007.
Title from electronic title page (viewed Dec. 18, 2007). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Psychology (Cognitive Psychology)." Discipline: Psychology; Department/School: Psychology.
APA, Harvard, Vancouver, ISO, and other styles
23

Kramdi, Seifeddine. "A modal approach to model computational trust." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30146/document.

Full text
Abstract:
Le concept de confiance est un concept sociocognitif qui adresse la question de l'interaction dans les systèmes concurrents. Quand la complexité d'un système informatique prohibe l'utilisation de solutions traditionnelles de sécurité informatique en amont du processus de développement (solutions dites de type dur), la confiance est un concept candidat pour le développement de systèmes d'aide à l'interaction. Dans cette thèse, notre but majeur est de présenter une vue d'ensemble de la discipline de la modélisation de la confiance dans les systèmes informatiques, et de proposer quelques modèles logiques pour le développement de modules de confiance. Nous adoptons comme contexte applicatif majeur les applications basées sur les architectures orientées services, qui sont utilisées pour modéliser des systèmes ouverts telles que les applications web. Nous utiliserons pour cela une abstraction qui modélisera ce genre de systèmes comme des systèmes multi-agents. Notre travail est divisé en trois parties. La première propose une étude de la discipline : nous y présentons les pratiques utilisées par les chercheurs et les praticiens de la confiance pour modéliser et utiliser ce concept dans différents systèmes ; cette analyse nous permet de définir un certain nombre de points critiques que la discipline doit aborder pour se développer. La deuxième partie de notre travail présente notre premier modèle de confiance. Cette première solution, basée sur un formalisme logique (logique dynamique épistémique), démarre d'une interprétation de la confiance comme une croyance sociocognitive ; ce modèle présente une première modélisation de la confiance. Après avoir prouvé la décidabilité de notre formalisme, nous proposons une méthodologie pour inférer la confiance en des actions complexes à partir de notre confiance dans des actions atomiques. Nous illustrons ensuite comment notre solution peut être mise en pratique dans un cas d'utilisation basé sur la combinaison de services dans les architectures orientées services. La dernière partie de notre travail consiste en un modèle de confiance où cette notion sera perçue comme une spécialisation du raisonnement causal tel qu'implémenté dans le formalisme des règles de production. Après avoir adapté ce formalisme au cas épistémique, nous décrivons trois modèles basés sur l'idée d'associer la confiance au raisonnement non monotone. Ces trois modèles permettent respectivement d'étudier comment la confiance est générée, comment elle-même génère les croyances d'un agent et, finalement, sa relation avec son contexte d'utilisation.
The concept of trust is a socio-cognitive concept that plays an important role in representing interactions within concurrent systems. When the complexity of a computational system and its unpredictability makes standard security solutions (commonly called hard security solutions) inapplicable, computational trust is one of the most useful concepts to design protocols of interaction. In this work, our main objective is to present a prospective survey of the field of study of computational trust. We will also present two trust models, based on logical formalisms, and show how they can be studied and used. While trying to stay general in our study, we use service-oriented architecture paradigm as a context of study when examples are needed. Our work is subdivided into three chapters. The first chapter presents a general view of the computational trust studies. Our approach is to present trust studies in three main steps. Introducing trust theories as first attempts to grasp notions linked to the concept of trust, fields of application, that explicit the uses that are traditionally associated to computational trust, and finally trust models, as an instantiation of a trust theory, w.r.t. some formal framework. Our survey ends with a set of issues that we deem important to deal with in priority in order to help the advancement of the field. The next two chapters present two models of trust. Our first model is an instantiation of Castelfranchi & Falcone's socio-cognitive trust theory. Our model is implemented using a Dynamic Epistemic Logic that we propose. The main originality of our solution is the fact that our trust definition extends the original model to complex action (programs, composed services, etc.) and the use of authored assignment as a special kind of atomic actions. The use of our model is then illustrated in a case study related to service-oriented architecture. 
Our second model extends our socio-cognitive definition to an abductive framework that allows us to associate trust to explanations. Our framework is an adaptation of Bochman's production relations to the epistemic case. Since Bochman's approach was initially proposed to study causality, our definition of trust in this second model presents trust as a special case of causal reasoning, applied to a social context. We end our manuscript with a conclusion that presents how we would like to extend our work.
APA, Harvard, Vancouver, ISO, and other styles
24

Banerjee, Bonny. "Spatial problem solving for diagrammatic reasoning." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1194455860.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Liu, Xudong. "MODELING, LEARNING AND REASONING ABOUT PREFERENCE TREES OVER COMBINATORIAL DOMAINS." UKnowledge, 2016. http://uknowledge.uky.edu/cs_etds/43.

Full text
Abstract:
In my Ph.D. dissertation, I have studied problems arising in various aspects of preferences: preference modeling, preference learning, and preference reasoning, when preferences concern outcomes ranging over combinatorial domains. Preferences are a major research component in artificial intelligence (AI) and decision theory, and are closely related to the social choice theory considered by economists and political scientists. In my dissertation, I have exploited emerging connections between preferences in AI and social choice theory. Most of my research is on qualitative preference representations that extend and combine existing formalisms such as conditional preference networks, lexicographic preference trees, answer-set optimization programs, and possibilistic logic; on learning problems that aim at discovering qualitative preference models and predictive preference information from practical data; and on preference reasoning problems centered around qualitative preference optimization and aggregation methods. Applications of my research include recommender systems, decision support tools, multi-agent systems, and Internet trading and marketing platforms.
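To fix intuitions, the lexicographic preference trees mentioned in this abstract can be illustrated by a minimal sketch. This is not the dissertation's actual formalism; the meal-choice domain, attribute names, and value rankings below are invented for illustration, and the tree is flattened to a single fixed attribute order.

```python
# Minimal sketch of lexicographic preference over a combinatorial domain:
# attributes are inspected in a fixed order, and the first attribute on
# which two outcomes differ decides the preference.

def lex_prefer(outcome_a, outcome_b, attribute_order, preferred):
    """Return the preferred outcome, or None if the outcomes are identical."""
    for attr in attribute_order:
        a, b = outcome_a[attr], outcome_b[attr]
        if a != b:
            # The outcome whose value ranks earlier in this attribute's
            # preference list wins.
            ranking = preferred[attr]
            return outcome_a if ranking.index(a) < ranking.index(b) else outcome_b
    return None  # identical on all attributes

# Hypothetical domain: choosing a meal.
order = ["cuisine", "price", "spicy"]
prefs = {"cuisine": ["italian", "thai"],
         "price": ["cheap", "expensive"],
         "spicy": ["no", "yes"]}

x = {"cuisine": "thai", "price": "cheap", "spicy": "no"}
y = {"cuisine": "italian", "price": "expensive", "spicy": "yes"}
print(lex_prefer(x, y, order, prefs))  # y: italian beats thai on the top attribute
```

Learning such a model from data then amounts to inferring the attribute order and the per-attribute value rankings from observed pairwise choices.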
APA, Harvard, Vancouver, ISO, and other styles
26

Miranda, Luís Miguel Gonçalves. "Data fusion with computational intelligence techniques: a case study of fuzzy inference for terrain assessment." Master's thesis, Faculdade de Ciências e Tecnologia, 2014. http://hdl.handle.net/10362/12338.

Full text
Abstract:
Dissertation submitted in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Constant technological progress brings with it the storage of all kinds of data. Satellites, mobile phones, cameras and other types of electronic equipment produce, on a daily basis, data of gigantic proportions. These data alone may not convey any meaning and may even be impossible to interpret without specific auxiliary measures. Data fusion addresses this issue by putting these data to use, processing them into proper knowledge for whoever analyzes them. Within data fusion there are numerous processing approaches and methodologies; the one highlighted here is the one that most resembles imprecise human knowledge: fuzzy reasoning. This method is applied in several areas, including as an inference system for hazard detection and avoidance in unmanned space missions. Fundamental to this is the use of fuzzy inference systems, where the problem is modeled through a set of linguistic rules, fuzzy sets, membership functions and other information. In this thesis, a fuzzy inference system for identifying safe landing sites through the fusion of maps was developed, together with a data visualization tool. Thus, classification and validation of the information are made easier with such tools.
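The flavour of fuzzy inference described above can be sketched in a few lines. The inputs (slope, roughness), membership functions, and rules below are invented for illustration; the thesis's actual rule base and fused maps are not reproduced here.

```python
# Hedged sketch of a tiny Mamdani-style fuzzy inference step for terrain
# assessment: fuzzify two inputs, fire two linguistic rules, and
# defuzzify to a crisp safety score in [0, 1].

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def landing_safety(slope_deg, roughness):
    # Fuzzify the inputs.
    slope_low = tri(slope_deg, -10, 0, 15)
    slope_high = tri(slope_deg, 10, 30, 50)
    rough_low = tri(roughness, -0.5, 0.0, 0.5)
    rough_high = tri(roughness, 0.3, 1.0, 1.7)

    # Rule firing strengths (min for AND, max for OR).
    safe = min(slope_low, rough_low)      # IF slope low AND roughness low THEN safe
    unsafe = max(slope_high, rough_high)  # IF slope high OR roughness high THEN unsafe

    # Crude weighted average over two singleton output values (safe=1, unsafe=0).
    if safe + unsafe == 0:
        return 0.5  # no rule fired: undecided
    return (safe * 1.0 + unsafe * 0.0) / (safe + unsafe)

print(landing_safety(2.0, 0.1))   # nearly flat, smooth terrain -> close to 1
print(landing_safety(35.0, 0.9))  # steep, rough terrain -> close to 0
```

Applied per map cell over fused slope and roughness layers, the same computation yields the kind of safety map the abstract describes.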
APA, Harvard, Vancouver, ISO, and other styles
27

Abbas, Kaja Moinudeen. "Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5302/.

Full text
Abstract:
Probabilistic reasoning under uncertainty is well suited to the analysis of disease dynamics. The stochastic nature of disease progression is modeled by applying the principles of Bayesian learning. Bayesian learning predicts the disease progression, including prevalence and incidence, for a geographic region and demographic composition. Public health resources, prioritized by the order of risk levels of the population, will efficiently minimize the disease spread and curtail the epidemic as early as possible. A Bayesian network representing the outbreak of influenza and pneumonia in a geographic region is ported to a newer region with a different demographic composition. Upon analysis for the newer region, the corresponding prevalence of influenza and pneumonia among the different demographic subgroups is inferred. Bayesian reasoning coupled with a disease timeline is used to reverse engineer an influenza outbreak for a given geographic and demographic setting. The temporal flow of the epidemic among the different sections of the population is analyzed to identify the corresponding risk levels. In comparison to uniformly spread vaccination, prioritizing the limited vaccination resources to the higher-risk groups results in relatively lower influenza prevalence. HIV incidence in Texas from 1989-2002 is analyzed using demographic-based epidemic curves. Dynamic Bayesian networks are integrated with probability distributions of HIV surveillance data coupled with census population data to estimate the proportion of HIV incidence among the different demographic subgroups. Demographic-based risk analysis lends itself to the observation of a varied spectrum of HIV risk among the different demographic subgroups. A methodology using hidden Markov models is introduced that enables investigation of the impact of social behavioral interactions on the incidence and prevalence of infectious diseases.
The methodology is presented in the context of simulated disease outbreak data for influenza. Probabilistic reasoning analysis enhances the understanding of disease progression in order to identify the critical points of surveillance, control and prevention. Public health resources, prioritized by the order of risk levels of the population, will efficiently minimize the disease spread and curtail the epidemic as early as possible.
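The demographic risk inference this abstract describes can be sketched with a minimal two-node Bayesian network (AgeGroup influencing Influenza). The probabilities below are made up for illustration and are not the dissertation's surveillance data.

```python
# Toy Bayesian network AgeGroup -> Influenza: compute marginal flu
# prevalence and the posterior risk profile over demographic subgroups.

p_age = {"child": 0.25, "adult": 0.55, "senior": 0.20}            # P(AgeGroup)
p_flu_given_age = {"child": 0.12, "adult": 0.05, "senior": 0.15}  # P(Flu | AgeGroup)

# Marginal prevalence: sum over the demographic subgroups.
p_flu = sum(p_age[g] * p_flu_given_age[g] for g in p_age)

# Posterior risk profile P(AgeGroup | Flu) by Bayes' rule -- the kind of
# quantity used to prioritise limited vaccination toward higher-risk groups.
posterior = {g: p_age[g] * p_flu_given_age[g] / p_flu for g in p_age}

print(round(p_flu, 4))                                  # 0.0875
print({g: round(p, 3) for g, p in posterior.items()})
```

Extending the age variable with time slices gives the dynamic Bayesian networks used for the temporal analyses mentioned above.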
APA, Harvard, Vancouver, ISO, and other styles
28

Gibson, Andrew P. "Reflective writing analytics and transepistemic abduction." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/106952/1/Andrew_Gibson_Thesis.pdf.

Full text
Abstract:
This thesis presents a model of Reflective Writing Analytics which brings together two distinct ways of knowing: the human world of individuals in society, and the machine world of computers and mathematics. The thesis presents a specialised mode of reasoning called Transepistemic Abduction which provides a way of justifying intuition and heuristic approaches to computational analysis of reflective writing.
APA, Harvard, Vancouver, ISO, and other styles
29

Lee, Jae Hee [Verfasser], Christian [Akademischer Betreuer] Freksa, and Li [Akademischer Betreuer] Sanjiang. "Qualitative Reasoning about Relative Directions : Computational Complexity and Practical Algorithm [Elektronische Ressource] / Jae Hee Lee. Gutachter: Christian Freksa ; Li Sanjiang. Betreuer: Christian Freksa." Bremen : Staats- und Universitätsbibliothek Bremen, 2013. http://d-nb.info/1072077922/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Skone, Gwyn S. "Stratagems for effective function evaluation in computational chemistry." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:8843465b-3e5f-45d9-a973-3b27949407ef.

Full text
Abstract:
In recent years, the potential benefits of high-throughput virtual screening to the drug discovery community have been recognized, bringing an increase in the number of tools developed for this purpose. These programs have to process large quantities of data, searching for an optimal solution in a vast combinatorial range. This is particularly the case for protein-ligand docking, since proteins are sophisticated structures with complicated interactions for which either molecule might reshape itself. Even the very limited flexibility model to be considered here, using ligand conformation ensembles, requires six dimensions of exploration - three translations and three rotations - per rigid conformation. The functions for evaluating pose suitability can also be complex to calculate. Consequently, the programs being written for these biochemical simulations are extremely resource-intensive. This work introduces a pure computer science approach to the field, developing techniques to improve the effectiveness of such tools. Their architecture is generalized to an abstract pattern of nested layers for discussion, covering scoring functions, search methods, and screening overall. Based on this, new stratagems for molecular docking software design are described, including lazy or partial evaluation, geometric analysis, and parallel processing implementation. In addition, a range of novel algorithms are presented for applications such as active site detection with linear complexity (PIES) and small molecule shape description (PASTRY) for pre-alignment of ligands. The various stratagems are assessed individually and in combination, using several modified versions of an existing docking program, to demonstrate their benefit to virtual screening in practical contexts. In particular, the importance of appropriate precision in calculations is highlighted.
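One of the stratagems this abstract names, lazy or partial evaluation, can be sketched briefly. The additive per-pair score and the pose data below are invented; real docking scoring functions are far more involved.

```python
# Sketch of partial evaluation in pose scoring: abandon a pose as soon as
# its running score can no longer beat the best (lowest) score found so far.

def score_pose(pair_energies, best_so_far):
    """Sum per-atom-pair energies, bailing out early once the partial sum
    reaches the incumbent (lower scores are better here)."""
    total = 0.0
    for e in pair_energies:
        total += e
        if total >= best_so_far:
            return None  # pruned: cannot improve on the incumbent
    return total

poses = [
    [0.2, 0.1, 0.3],  # total 0.6, becomes the incumbent
    [0.5, 0.9, 0.4],  # pruned as soon as the partial sum reaches 0.6
    [0.1, 0.2, 0.1],  # total 0.4, the new best
]

best = float("inf")
for p in poses:
    s = score_pose(p, best)
    if s is not None and s < best:
        best = s
print(round(best, 2))  # 0.4
```

The benefit grows with the cost of each term: the earlier a hopeless pose is abandoned, the more expensive evaluations the screen avoids.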
APA, Harvard, Vancouver, ISO, and other styles
31

Antos, Dimitrios. "Deploying Affect-Inspired Mechanisms to Enhance Agent Decision-Making and Communication." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10107.

Full text
Abstract:
Computer agents are required to make appropriate decisions quickly and efficiently. As the environments in which they act become increasingly complex, efficient decision-making becomes significantly more challenging. This thesis examines the positive ways in which human emotions influence people’s ability to make good decisions in complex, uncertain contexts, and develops computational analogues of these beneficial functions, demonstrating their usefulness in agent decision-making and communication. For decision-making by a single agent in large-scale environments with stochasticity and high uncertainty, the thesis presents GRUE (Goal Re-prioritization Using Emotion), a decision-making technique that deploys emotion-inspired computational operators to dynamically re-prioritize the agent’s goals. In two complex domains, GRUE is shown to result in improved agent performance over many existing techniques. Agents working in groups benefit from communicating and sharing information that would otherwise be unobservable. The thesis defines an affective signaling mechanism, inspired by the beneficial communicative functions of human emotion, that increases coordination. In two studies, agents using the mechanism are shown to make faster and more accurate inferences than agents that do not signal, resulting in improved performance. Moreover, affective signals confer performance increases equivalent to those achieved by broadcasting agents’ entire private state information. Emotions are also useful signals in agents’ interactions with people, influencing people’s perceptions of them. A computer-human negotiation study is presented, in which virtual agents expressed emotion. Agents whose emotion expressions matched their negotiation strategy were perceived as more trustworthy, and they were more likely to be selected for future interactions. 
In addition, to address similar limitations in strategic environments, this thesis uses the theory of reasoning patterns in complex game-theoretic settings. An algorithm is presented that speeds up equilibrium computation in certain classes of games. For Bayesian games, with and without a common prior, the thesis also discusses a novel graphical formalism that allows agents' possibly inconsistent beliefs to be succinctly represented, and for reasoning patterns to be defined in such games. Finally, the thesis presents a technique for generating advice from a game's reasoning patterns for human decision-makers, and demonstrates empirically that such advice helps people make better decisions in a complex game.
Engineering and Applied Sciences
APA, Harvard, Vancouver, ISO, and other styles
32

Furno, Domenico. "Hybrid approaches based on computational intelligence and semantic web for distributed situation and context awareness." Doctoral thesis, Universita degli studi di Salerno, 2013. http://hdl.handle.net/10556/927.

Full text
Abstract:
2011 - 2012
The research work focuses on Situation Awareness and Context Awareness topics. Specifically, Situation Awareness involves being aware of what is happening in the vicinity to understand how information, events, and one's own actions will impact goals and objectives, both immediately and in the near future. Thus, Situation Awareness is especially important in application domains where the information flow can be quite high and poor decision making may lead to serious consequences. On the other hand, Context Awareness is considered a process to support user applications to adapt interfaces, tailor the set of application-relevant data, increase the precision of information retrieval, discover services, make the user interaction implicit, or build smart environments. Despite being slightly different, Situation and Context Awareness involve common problems such as: the lack of support for the acquisition and aggregation of dynamic environmental information from the field (i.e. sensors, cameras, etc.); the lack of formal approaches to knowledge representation (i.e. contexts, concepts, relations, situations, etc.) and processing (reasoning, classification, retrieval, discovery, etc.); the lack of automated and distributed systems, with considerable computing power, to support reasoning over the huge quantity of knowledge extracted from sensor data. So, the thesis researches new approaches for distributed Context and Situation Awareness and proposes to apply them in order to achieve some related research objectives such as knowledge representation, semantic reasoning, pattern recognition and information retrieval. The research work starts from the study and analysis of the state of the art in terms of techniques, technologies, tools and systems to support Context/Situation Awareness. The main aim is to develop a new contribution in this field by integrating techniques deriving from the fields of Semantic Web, Soft Computing and Computational Intelligence.
From an architectural point of view, several frameworks are defined according to the multi-agent paradigm. Furthermore, some preliminary experimental results have been obtained in application domains such as Airport Security, Traffic Management, Smart Grids and Healthcare. Finally, future work will proceed in the following directions: semantic modeling of fuzzy control, temporal issues, automatic ontology elicitation, extension to other application domains, and further experiments. [edited by author]
XI n.s.
APA, Harvard, Vancouver, ISO, and other styles
33

Johansson, Rebecka. "Programmering som verktyg för lärande i matematik : - En empirisk studie av elevers resonemangsförmåga i två olika undervisningsmiljöer." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-69667.

Full text
Abstract:
Programmering som verktyg för lärande i matematik- En empirisk studie av elevers resonemangsförmåga i två olika undervisningsmiljöer För att förbereda grundskoleelever för denna allt mer digitaliserade värld, infördes programmering i kursplanen för matematik den första juli 2018. Syftet med den här studien var att undersöka om arbete i en programmeringsmiljö kan erbjuda nya möjligheter för lärande i matematik jämfört med en ouppkopplad lärmiljö. Detta undersöktes med hjälp av två för ändamålet särskilt designade lektioner, en där arbetet skedde ouppkopplat och en där arbetet skedde i en programmeringsmiljö. De deltagande eleverna arbetade parvis, och deras arbete observerades med hjälp av både fältanteckningar och ljudinspelningar. Elevernas lärande undersöktes genom att analysera deras matematiska resonemang vid de båda lektionstillfällena. Analysen skedde med hjälp av fyra analysfrågor, och resultatet visar tendenser till en skillnad i elevernas matematiska resonemang vid arbete i de två olika lärmiljöerna. På individnivå pekar resultatet på en variation i vilken av lärmiljöerna som var mest fördelaktig. På gruppnivå var det däremot fler elever som i större utsträckning följde varandras resonemang när de arbetade i programmeringsmiljön. Dessutom visade majoriteten av eleverna på en större uthållighet i att lösa uppgifterna när de arbetade med programmering. Vad dessa skillnader kan bero på diskuteras såväl i samband med studiens resultat som tidigare forskning. Slutsatsen lyder att programmering kan erbjuda elever nya sätt att lära matematik och därför bör användas som ett av flera verktyg i undervisningen.
Programming as a learning tool in mathematics - An empirical study of pupils' mathematical reasoning in two different educational environments In order to prepare pupils for a more and more digitalised world, programming has been included in the Swedish curriculum for mathematics since July 1, 2018. The purpose of this study was to examine if working in a programming environment, in comparison to an unplugged environment, would offer pupils new opportunities for learning mathematics. This was examined by analysing the mathematical reasoning of the pupils during two different lessons; one where they worked without computers and one where they used computers and worked with block programming. The participating pupils worked in pairs, and the work and process of the pupils were observed and recorded by field notes and audio recordings. The learning opportunities were examined and the pupils' mathematical reasoning during both lessons was analysed. Four questions served as basis for the analysis, and the results showed a difference in the pupils' mathematical reasoning in the two different learning environments. At an individual level, the results varied as regards which working environment was the most beneficial. At a group level, on the other hand, more of the pupils were able to follow each other's mathematical reasoning when working in the programming environment. Furthermore, most of the pupils were more perseverant in solving the tasks when working in the programming environment. The possible cause of these differences is discussed in connection to the results of this study as well as to previous research. The conclusion is that a programming environment can offer the pupils new opportunities to learn and should be used as one of many ways to teach mathematics.
APA, Harvard, Vancouver, ISO, and other styles
34

Mendes, Daniel Roque. "The role of system dynamics in the promotion of scientific computation literacy : an exploration - comprising an analytical study and an empirical survey - of system dynamics' potential for promoting scientific reasoning and computational thinking, and in modifying the science learner's epistemological commitments, respectively." [S.l.] : [s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=965211339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Carbin, Michael (Michael James). "Logical reasoning for approximate and unreliable computation." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99813.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 343-350).
Improving program performance and resilience are long-standing goals. Traditional approaches include a variety of transformation, compilation, and runtime techniques that share the common property that the resulting program has the same semantics as the original program. However, researchers have recently proposed a variety of new techniques that set aside this traditional restriction and instead exploit opportunities to change the semantics of programs to improve performance and resilience. Techniques include skipping portions of a program's computation, selecting different implementations of a program's subcomputations, executing programs on unreliable hardware, and synthesizing values to enable programs to skip or execute through otherwise fatal errors. A major barrier to the acceptance of these techniques in both the broader research community and in industrial practice is the challenge that the resulting programs may exhibit behaviors that differ from that of the original program, potentially jeopardizing the program's resilience, safety, and accuracy. This thesis presents the first general programming systems for precisely verifying and reasoning about the programs that result from these techniques. This thesis presents a programming language and program logic for verifying worst-case properties of a transformed program. Specifically, the framework enables verifying that a transformed program satisfies important assertions about its safety (e.g., that it does not access invalid memory) and accuracy (e.g., that it returns a result within a bounded distance of that of the original program). This thesis also presents a programming language and automated analysis for verifying a program's quantitative reliability - the probability the transformed program returns the same result as the original program - when executed on unreliable hardware.
The results of this thesis, which include programming languages, program logics, program analysis, and applications thereof, present the first steps toward reaping the benefits of changing the semantics of programs in a beneficial yet principled way.
by Michael James Carbin.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
36

Magka, Despoina. "Foundations and applications of knowledge representation for structured entities." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:4a3078cc-5770-4a9b-81d4-8bc52b41e294.

Full text
Abstract:
Description Logics form a family of powerful ontology languages widely used by academics and industry experts to capture and intelligently manage knowledge about the world. A key advantage of Description Logics is their amenability to automated reasoning that enables the deduction of knowledge that has not been explicitly stated. However, in order to ensure decidability of automated reasoning algorithms, suitable restrictions are usually enforced on the shape of structures that are expressible using Description Logics. As a consequence, Description Logics fall short of expressive power when it comes to representing cyclic structures, which abound in life sciences and other disciplines. The objective of this thesis is to explore ontology languages that are better suited for the representation of structured objects. It is suggested that an alternative approach which relies on nonmonotonic existential rules can provide a promising candidate for modelling such domains. To this end, we have built a comprehensive theoretical and practical framework for the representation of structured entities along with a surface syntax designed to allow the creation of ontological descriptions in an intuitive way. Our formalism is based on nonmonotonic existential rules and exhibits a favourable balance between expressive power and computational as well as empirical tractability. In order to ensure decidability of reasoning, we introduce a number of acyclicity criteria that strictly generalise many of the existing ones. We also present a novel stratification condition that properly extends 'classical' stratification and allows for capturing both definitional and conditional aspects of complex structures. The applicability of our formalism is supported by a prototypical implementation, which is based on an off-the-shelf answer set solver and is tested over a realistic knowledge base.
Our experimental results demonstrate improvement of up to three orders of magnitude in comparison with previous evaluation efforts and also expose numerous modelling errors of a manually curated biochemical knowledge base. Overall, we believe that our work lays the practical and theoretical foundations of an ontology language that is well-suited for the representation of structured objects. From a modelling point of view, our approach could stimulate the adoption of a different and expressive reasoning paradigm for which robustly engineered mature reasoners are available; it could thus pave the way for the representation of a broader spectrum of knowledge. At the same time, our theoretical contributions reveal useful insights into logic-based knowledge representation and reasoning. Therefore, our results should be of value to ontology engineers and knowledge representation researchers alike.
APA, Harvard, Vancouver, ISO, and other styles
37

Feng, Lu. "On learning assumptions for compositional verification of probabilistic systems." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:12502ba2-478f-429a-a250-6590c43a8e8a.

Full text
Abstract:
Probabilistic model checking is a powerful formal verification method that can ensure the correctness of real-life systems that exhibit stochastic behaviour. The work presented in this thesis aims to solve the scalability challenge of probabilistic model checking, by developing, for the first time, fully-automated compositional verification techniques for probabilistic systems. The contributions are novel approaches for automatically learning probabilistic assumptions for three different compositional verification frameworks. The first framework considers systems that are modelled as Segala probabilistic automata, with assumptions captured by probabilistic safety properties. A fully-automated approach is developed to learn assumptions for various assume-guarantee rules, including an asymmetric rule Asym for two-component systems, an asymmetric rule Asym-N for n-component systems, and a circular rule Circ. This approach uses the L* and NL* algorithms for automata learning. The second framework considers systems where the components are modelled as probabilistic I/O systems (PIOSs), with assumptions represented by Rabin probabilistic automata (RPAs). A new (complete) assume-guarantee rule Asym-Pios is proposed for this framework. In order to develop a fully-automated approach for learning assumptions and performing compositional verification based on the rule Asym-Pios, a (semi-)algorithm to check language inclusion of RPAs and an L*-style learning method for RPAs are also proposed. The third framework considers the compositional verification of discrete-time Markov chains (DTMCs) encoded in Boolean formulae, with assumptions represented as Interval DTMCs (IDTMCs). A new parallel operator for composing an IDTMC and a DTMC is defined, and a new (complete) assume-guarantee rule Asym-Idtmc that uses this operator is proposed. 
A fully-automated approach is formulated to learn assumptions for rule Asym-Idtmc, using the CDNF learning algorithm and a new symbolic reachability analysis algorithm for IDTMCs. All approaches proposed in this thesis have been implemented as prototype tools and applied to a range of benchmark case studies. Experimental results show that these approaches are helpful for automating the compositional verification of probabilistic systems through learning small assumptions, but may suffer from high computational complexity or even undecidability. The techniques developed in this thesis can assist in developing scalable verification frameworks for probabilistic models.
APA, Harvard, Vancouver, ISO, and other styles
38

Smith, Marc L. "View-centric reasoning about parallel and distributed computation." Doctoral diss., University of Central Florida, 2000. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/1597.

Full text
Abstract:
University of Central Florida College of Engineering Thesis
The development of distributed applications has not progressed as rapidly as its enabling technologies. In part, this is due to the difficulty of reasoning about such complex systems. In contrast to sequential systems, parallel systems give rise to parallel events, and the resulting uncertainty of the observed order of these events. Loosely coupled distributed systems complicate this even further by introducing the element of multiple imperfect observers of these parallel events. The goal of this dissertation is to advance parallel and distributed systems development by producing a parameterized model that can be instantiated to reflect the computation and coordination properties of such systems. The result is a model called paraDOS that we show to be general enough to have instantiations of two very distinct distributed computation models, Actors and tuple space. We show how paraDOS allows us to use operational semantics to reason about computation when such reasoning must account for multiple, inconsistent and imperfect views. We then extend the paraDOS model with an abstraction to support composition of communicating computational systems. This extension gives us a tool to reason formally about heterogeneous systems, and about new distributed computing paradigms such as the multiple-tuple-space support seen in Sun's JavaSpaces and IBM's T Spaces.
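For readers unfamiliar with the tuple-space coordination model the abstract refers to, a minimal sketch of its classic primitives (out/rd/in with pattern matching) follows. This illustrates the style of system paraDOS can instantiate, not the paraDOS model itself; the class and method names are invented.

```python
# Tiny single-process tuple space: out deposits a tuple, rd reads a
# matching tuple non-destructively, in_ withdraws one. None is a wildcard.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Deposit a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, tup, pattern):
        return len(tup) == len(pattern) and all(
            p is None or p == t for t, p in zip(tup, pattern))

    def rd(self, pattern):
        """Non-destructively read the first matching tuple (or None)."""
        return next((t for t in self.tuples if self._match(t, pattern)), None)

    def in_(self, pattern):
        """Withdraw the first matching tuple (or None)."""
        t = self.rd(pattern)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("temp", "room1", 21.5))
ts.out(("temp", "room2", 19.0))
print(ts.rd(("temp", "room2", None)))  # ('temp', 'room2', 19.0)
print(ts.in_(("temp", None, None)))    # withdraws ('temp', 'room1', 21.5)
print(len(ts.tuples))                  # 1
```

In a real coordination language the rd and in_ operations block until a match appears, which is precisely where the ordering uncertainty discussed in the abstract arises.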
Ph.D.
Doctorate;
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering and Computer Science
196 p.
xiv, 196 leaves, bound : ill. ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
39

Shahwan, Ahmad. "Processing Geometric Models of Assemblies to Structure and Enrich them with Functional Information." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM023/document.

Full text
Abstract:
La maquette numérique d'un produit occupe une position centrale dans le processus de développement de produit. Elle est utilisée comme représentation de référence des produits, en définissant la forme géométrique de chaque composant, ainsi que les représentations simplifiées des liaisons entre composants. Toutefois, les observations montrent que ce modèle géométrique n'est qu'une représentation simplifiée du produit réel. De plus, et grâce à son rôle clé, la maquette numérique est de plus en plus utilisée pour structurer les informations non-géométriques qui sont ensuite utilisées dans diverses étapes du processus de développement de produits. Une demande importante est d'accéder aux informations fonctionnelles à différents niveaux de la représentation géométrique d'un assemblage. Ces informations fonctionnelles s'avèrent essentielles pour préparer des analyses éléments finis. Dans ce travail, nous proposons une méthode automatisée afin d'enrichir le modèle géométrique extrait d'une maquette numérique avec les informations fonctionnelles nécessaires pour la préparation d'un modèle de simulation par éléments finis. Les pratiques industrielles et les représentations géométriques simplifiées sont prises en compte lors de l'interprétation d'un modèle purement géométrique qui constitue le point de départ de la méthode proposée
The digital mock-up (DMU) of a product has taken a central position in the product development process (PDP). It provides the geometric reference of the product assembly, as it defines the shape of each individual component, as well as the way components are put together. However, observations show that this geometric model is no more than a conventional representation of what the real product is. Additionally, and because of its pivotal role, the DMU is more and more required to provide information beyond mere geometry to be used in different stages of the PDP. An increasingly urgent demand is for functional information at different levels of the geometric representation of the assembly. This information is shown to be essential in phases such as geometric pre-processing for finite element analysis (FEA) purposes. In this work, an automated method is put forward that enriches a geometric model, which is the product DMU, with functional information needed for FEA preparations. To this end, the initial geometry is restructured at different levels according to functional annotation needs. Prevailing industrial practices and representation conventions are taken into account in order to functionally interpret the pure geometric model that provides a starting point for the proposed method
APA, Harvard, Vancouver, ISO, and other styles
40

Jin, Yi. "Belief Change in Reasoning Agents: Axiomatizations, Semantics and Computations." Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A24983.

Full text
Abstract:
The capability of changing beliefs upon new information in a rational and efficient way is crucial for an intelligent agent. Belief change has therefore been one of the central research fields in Artificial Intelligence (AI) for over two decades. In the AI literature, two different kinds of belief change operations have been intensively investigated: belief update, which deals with situations where the new information describes changes of the world; and belief revision, which assumes the world is static. As another important research area in AI, reasoning about actions mainly studies the problem of representing and reasoning about the effects of actions. These two research fields are closely related and apply a common underlying principle, namely that an agent should change its beliefs (knowledge) as little as possible whenever an adjustment is necessary. This opens up the possibility of reusing the ideas and results of one field in the other, and vice versa. This thesis aims to develop a general framework and devise computational models that are applicable in reasoning about actions. Firstly, I propose a new framework for iterated belief revision by introducing a new postulate in addition to the existing AGM/DP postulates, which provides general criteria for the design of iterated revision operators. Secondly, based on the new framework, a concrete iterated revision operator is devised. The semantic model of the operator gives nice intuitions and helps to show that it satisfies the desirable postulates. I also show that the computational model of the operator is almost optimal in both time and space complexity. In order to deal with the belief change problem in multi-agent systems, I introduce a concept of mutual belief revision, which is concerned with information exchange among agents. A concrete mutual revision operator is devised by generalizing the iterated revision operator.
Likewise, a semantic model is used to show the intuition and many nice properties of the mutual revision operator, and the complexity of its computational model is formally analyzed. Finally, I present a belief update operator which takes into account two important problems of reasoning about actions, i.e., disjunctive updates and domain constraints. Again, the update operator is presented with both a semantic model and a computational model.
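The minimal-change principle that runs through the abstract above can be illustrated with a toy Dalal-style revision over propositional models (an illustrative stand-in, not the operator devised in the thesis): the revised belief state keeps only those models of the new information that are closest, in Hamming distance, to the old beliefs.

```python
# Toy minimal-change revision: models are frozensets of true atoms.

def hamming(m1, m2, atoms):
    """Number of atoms on which two models differ."""
    return sum(1 for a in atoms if (a in m1) != (a in m2))

def revise(old_models, new_models, atoms):
    """Keep the models of the new information closest to the old beliefs."""
    dist = {m: min(hamming(m, o, atoms) for o in old_models) for m in new_models}
    best = min(dist.values())
    return {m for m, d in dist.items() if d == best}

atoms = ("p", "q")
# Old beliefs: p and q are both true.  New information: p is false.
old = {frozenset({"p", "q"})}
new = {frozenset({"q"}), frozenset()}        # the two models of "not p"
revised = revise(old, new, atoms)
print(revised)  # {frozenset({'q'})}: q is retained, only p is given up
```

The example shows the "change as little as possible" principle in action: of the two models of the new information, only the one that also keeps q survives.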
APA, Harvard, Vancouver, ISO, and other styles
41

Bush, V. J. "Recursion transformations for run-time control of parallel computations." Thesis, University of Manchester, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382888.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Job, Dominic Edward. "Case-based reasoning and evolutionary computation techniques for FPGA programming." Thesis, Edinburgh Napier University, 2001. http://researchrepository.napier.ac.uk/Output/4272.

Full text
Abstract:
A problem in Software Reuse (SR) is to find a software component appropriate to a given requirement. At present this is done by manually browsing through large libraries, which is very time-consuming and therefore expensive. Further to this, if the component is not the same as, but similar to, a requirement, the component must be adapted to meet the requirement. This browsing and adaptation requires a skilled user who can comprehend library entries and foresee their application. It is expensive to train users and to produce these documented libraries. The specialised software design domain chosen in this thesis is that of Field Programmable Gate Array (FPGA) programs. FPGAs are user-programmable microchips that have many applications, including encryption and control. This thesis is concerned with a specific technique for FPGA programming that uses Evolutionary Computing (EC) techniques to synthesize FPGA programs. EC techniques are based on natural systems such as the life cycle of living organisms or the formation of crystalline structures. They can generate solutions to problems without the need for a complete understanding of the problem. EC has been used to create software programs, and can be used as a knowledge-lean approach for generating libraries of software solutions. EC techniques produce solutions without documentation. To automate SR it has been shown that it is essential to understand the knowledge in the software library. In this thesis, techniques for automatically documenting EC-produced solutions are illustrated. It is also helpful to understand the principles at work in the reuse process. On examination of large collections of evolved programs, it is shown that these programs contain reusable modules. Further to this, it is shown that by studying series of similar software components, principles of scale can be deduced.
Case Based Reasoning (CBR) is a problem solving method that reuses old solutions to solve new problems and is an effective method of automatically reusing software libraries. These techniques enable automated creation, documentation and reuse of a software library. This thesis proposes that CBR is a feasible method for the reuse of EC designed FPGA programs. It is shown that EC synthesised FPGA programs can be documented, reused, and adapted to solve new problems, using automated CBR techniques.
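The retrieve-and-adapt cycle that CBR applies to such a library can be sketched in a few lines; the cases, feature sets, and netlist names below are hypothetical stand-ins, not artifacts from the thesis.

```python
# Toy CBR retrieval: pick the stored case whose feature set best overlaps
# the query, then reuse its solution.

cases = [
    {"features": {"adder", "4bit"},  "solution": "netlist_A"},
    {"features": {"adder", "8bit"},  "solution": "netlist_B"},
    {"features": {"parity", "8bit"}, "solution": "netlist_C"},
]

def jaccard(a, b):
    """Similarity of two feature sets: overlap divided by union."""
    return len(a & b) / len(a | b)

def retrieve(query):
    """Return the most similar stored case."""
    return max(cases, key=lambda c: jaccard(c["features"], query))

best = retrieve({"adder", "8bit", "lowpower"})
print(best["solution"])  # netlist_B: the closest stored case
```

A real system would follow retrieval with an adaptation step; here the retrieved solution is simply reused, which is the "null adaptation" the thesis argues goes beyond.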
APA, Harvard, Vancouver, ISO, and other styles
43

Ulrich, Karl T. "Computation and Pre-Parametric Design." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/6845.

Full text
Abstract:
My work is broadly concerned with the question "How can designs be synthesized computationally?" The project deals primarily with mechanical devices and focuses on pre-parametric design: design at the level of detail of a blackboard sketch rather than at the level of detail of an engineering drawing. I explore the project ideas in the domain of single-input single-output dynamic systems, like pressure gauges, accelerometers, and pneumatic cylinders. The problem solution consists of two steps: 1) generate a schematic description of the device in terms of idealized functional elements, and then 2) from the schematic description generate a physical description.
APA, Harvard, Vancouver, ISO, and other styles
44

Ballout, Ali. "Apprentissage actif pour la découverte d'axiomes." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4026.

Full text
Abstract:
This thesis addresses the challenge of evaluating candidate logical formulas, with a specific focus on axioms, by synergistically combining machine learning with symbolic reasoning. This innovative approach facilitates the automatic discovery of axioms, primarily in the evaluation phase of generated candidate axioms. The research aims to solve the issue of efficiently and accurately validating these candidates in the broader context of knowledge acquisition on the semantic Web. Recognizing the importance of existing generation heuristics for candidate axioms, this research focuses on advancing the evaluation phase of these candidates. Our approach involves utilizing these heuristic-based candidates and then evaluating their compatibility and consistency with existing knowledge bases. The evaluation process, which is typically computationally intensive, is revolutionized by developing a predictive model that effectively assesses the suitability of these axioms as a surrogate for traditional reasoning. This innovative model significantly reduces computational demands, employing reasoning as an occasional "oracle" to classify complex axioms where necessary. Active learning plays a pivotal role in this framework. It allows the machine learning algorithm to select specific data for learning, thereby improving its efficiency and accuracy with minimal labeled data. The thesis demonstrates this approach in the context of the semantic Web, where the reasoner acts as the "oracle," and the potential new axioms represent unlabeled data. This research contributes significantly to the fields of automated reasoning, natural language processing, and beyond, opening up new possibilities in areas like bioinformatics and automated theorem proving.
By effectively marrying machine learning with symbolic reasoning, this work paves the way for more sophisticated and autonomous knowledge discovery processes, heralding a paradigm shift in how we approach and leverage the vast expanse of data on the semantic Web
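The active-learning loop described above, in which a cheap surrogate model screens candidates and the expensive reasoner is consulted as an "oracle" only on the most uncertain ones, can be sketched as follows; the one-dimensional feature, margin-based uncertainty score, and threshold oracle are illustrative assumptions, not the thesis's actual model.

```python
def oracle(x):
    """Stand-in for the expensive reasoner: accepts candidates with feature >= 5."""
    return x >= 5

def uncertainty(x, pos, neg):
    """Surrogate confidence: margin between distances to the class centroids."""
    dp = abs(x - sum(pos) / len(pos))
    dn = abs(x - sum(neg) / len(neg))
    return abs(dp - dn)                 # small margin = uncertain

pool = [1, 2, 4, 5, 6, 8]               # unlabeled candidates (as features)
pos, neg = [9], [0]                     # two seed labels obtained from the oracle
for _ in range(3):                      # budget: only 3 extra oracle calls
    x = min(pool, key=lambda c: uncertainty(c, pos, neg))
    pool.remove(x)                      # query the oracle on the least certain item
    (pos if oracle(x) else neg).append(x)

# Classify what is left by nearest centroid, without further oracle calls.
pos_c, neg_c = sum(pos) / len(pos), sum(neg) / len(neg)
labels = {x: abs(x - pos_c) < abs(x - neg_c) for x in pool}
print(labels)  # {1: False, 2: False, 8: True} — agrees with the oracle on all three
```

Three well-chosen oracle calls suffice here to label the remaining pool correctly, which is the economy the framework aims for.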
APA, Harvard, Vancouver, ISO, and other styles
45

Schwartz, Hansen A. "The acquisition of lexical knowledge from the web for aspects of semantic interpretation." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5028.

Full text
Abstract:
This work investigates the effective acquisition of lexical knowledge from the Web to perform semantic interpretation. The Web provides an unprecedented amount of natural language from which to gain knowledge useful for semantic interpretation. The knowledge acquired is described as common sense knowledge, information one uses in his or her daily life to understand language and perception. Novel approaches are presented for both the acquisition of this knowledge and the use of the knowledge in semantic interpretation algorithms. The goal is to increase accuracy over other automatic semantic interpretation systems, and in turn enable stronger real-world applications such as machine translation, advanced Web search, sentiment analysis, and question answering. The major contributions of this dissertation consist of two methods of acquiring lexical knowledge from the Web, namely a database of common sense knowledge and Web selectors. The first method is a framework for acquiring a database of concept relationships. To acquire this knowledge, relationships between nouns are found on the Web and analyzed over WordNet using information theory, producing information about concepts rather than ambiguous words. For the second contribution, words called Web selectors are retrieved which take the place of an instance of a target word in its local context. The selectors serve for the system to learn the types of concepts that the sense of a target word should be similar to. Web selectors are acquired dynamically as part of a semantic interpretation algorithm, while the relationships in the database are useful to stand-alone programs. A final contribution of this dissertation concerns a novel semantic similarity measure and an evaluation of similarity and relatedness measures on tasks of concept similarity. Such tasks are useful when applying acquired knowledge to semantic interpretation.
Applications to word sense disambiguation, an aspect of semantic interpretation, are used to evaluate the contributions. Disambiguation systems which utilize semantically annotated training data are considered supervised. The algorithms of this dissertation are considered minimally supervised; they do not require training data created by humans, though they may use human-created data sources. In the case of evaluating a database of common sense knowledge, integrating the knowledge into an existing minimally-supervised disambiguation system significantly improved results: a 20.5% error reduction. Similarly, the Web selectors disambiguation system, which acquires knowledge directly as part of the algorithm, achieved results comparable with top minimally-supervised systems, an F-score of 80.2% on a standard noun disambiguation task. This work enables the study of many subsequent related tasks for improving semantic interpretation and its application to real-world technologies. Other aspects of semantic interpretation, such as semantic role labeling, could utilize the same methods presented here for word sense disambiguation. As the Web continues to grow, the capabilities of the systems in this dissertation are expected to increase. Although the Web selectors system achieves great results, a study in this dissertation shows likely improvements from acquiring more data. Furthermore, the methods for acquiring a database of common sense knowledge could be applied in a more exhaustive fashion for other types of common sense knowledge. Finally, perhaps the greatest benefits from this work will come from the enabling of real-world technologies that utilize semantic interpretation.
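The Web-selectors idea, in which words that can stand in for the target word in its local context vote for the sense they are closest to, can be sketched with a simple overlap score; the selectors and sense signatures below are hypothetical, and the scoring is a Lesk-style simplification rather than the dissertation's actual system.

```python
# Hypothetical selectors retrieved from the Web for one occurrence of "bank".
selectors = ["river", "stream", "water", "deposit"]

# Hypothetical signature words for each candidate sense.
senses = {
    "bank#1": {"money", "deposit", "loan", "account"},   # financial institution
    "bank#2": {"river", "stream", "water", "slope"},     # sloping land by water
}

def score(sense_words):
    """Count how many selectors fall in a sense's signature."""
    return sum(1 for s in selectors if s in sense_words)

best_sense = max(senses, key=lambda k: score(senses[k]))
print(best_sense)  # bank#2: most selectors are river-related
```

The real system compares selectors and senses with semantic similarity measures over WordNet rather than exact string overlap, but the voting structure is the same.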
ID: 029808979; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (Ph.D.)--University of Central Florida, 2011.; Includes bibliographical references (p. 141-160).
Ph.D.
Doctorate
Electrical Engineering and Computer Science
Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
46

Mroszczyk, Przemyslaw. "Computation with continuous mode CMOS circuits in image processing and probabilistic reasoning." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/computation-with-continuous-mode-cmos-circuits-in-image-processing-and-probabilistic-reasoning(57ae58b7-a08c-4a67-ab10-5c3a3cf70c09).html.

Full text
Abstract:
The objective of the research presented in this thesis is to investigate alternative ways of information processing employing asynchronous, data driven, and analogue computation in massively parallel cellular processor arrays, with applications in machine vision and artificial intelligence. The use of cellular processor architectures, with only local neighbourhood connectivity, is considered in VLSI realisations of the trigger-wave propagation in binary image processing, and in Bayesian inference. Design issues, critical in terms of the computational precision and system performance, are extensively analysed, accounting for the non-ideal operation of MOS devices caused by the second order effects, noise and parameter mismatch. In particular, CMOS hardware solutions for two specific tasks: binary image skeletonization and sum-product algorithm for belief propagation in factor graphs, are considered, targeting efficient design in terms of the processing speed, power, area, and computational precision. The major contributions of this research are in the area of continuous-time and discrete-time CMOS circuit design, with applications in moderate precision analogue and asynchronous computation, accounting for parameter variability. Various analogue and digital circuit realisations, operating in the continuous-time and discrete-time domains, are analysed in theory and verified using combined Matlab-Hspice simulations, providing a versatile framework suitable for custom specific analyses, verification and optimisation of the designed systems. Novel solutions, exhibiting reduced impact of parameter variability on the circuit operation, are presented and applied in the designs of the arithmetic circuits for matrix-vector operations and in the data driven asynchronous processor arrays for binary image processing. 
Several mismatch optimisation techniques are demonstrated, based on the use of switched-current approach in the design of current-mode Gilbert multiplier circuit, novel biasing scheme in the design of tunable delay gates, and averaging technique applied to the analogue continuous-time circuits realisations of Bayesian networks. The most promising circuit solutions were implemented on the PPATC test chip, fabricated in a standard 90 nm CMOS process, and verified in experiments.
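The sum-product algorithm that the thesis maps onto analogue CMOS hardware reduces, on a tree-shaped factor graph, to marginalising factors through messages; a minimal software sketch of one message for two binary variables (with made-up factor values) is:

```python
prior_a = {0: 0.8, 1: 0.2}                       # unary factor f_A(a)
pair = {(0, 0): 0.9, (0, 1): 0.1,                # pairwise factor f_AB(a, b)
        (1, 0): 0.3, (1, 1): 0.7}

# Sum-product message from A to B: m(b) = sum_a f_A(a) * f_AB(a, b)
msg = {b: sum(prior_a[a] * pair[(a, b)] for a in (0, 1)) for b in (0, 1)}
z = sum(msg.values())
belief_b = {b: v / z for b, v in msg.items()}    # normalised marginal P(B)
print(belief_b)
```

Each such multiply-accumulate per message term is exactly the operation the thesis realises with current-mode Gilbert multiplier circuits, which is why multiplier mismatch translates directly into inference error.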
APA, Harvard, Vancouver, ISO, and other styles
47

Morais, Anuar Daian de. "O desenvolvimento do raciocínio condicional a partir do uso de teste no squeak etoys." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/164383.

Full text
Abstract:
The present thesis presents an investigation into the development of conditional reasoning, considered a key component of logical-deductive thinking, in children and adolescents who participated in a programming experience with the software Squeak Etoys. The development of conditional reasoning is classified into stages related to the composition and reversal of the transformations that operate on implication, culminating in the full reversibility that corresponds, in Piaget's theory, to the construction and mobilization of the INRC group of transformations (Identity, Negation, Reciprocity and Correlation). These stages are identified through interviews conducted according to Piaget's clinical method, via the application of three programming challenges of increasing complexity, whose solution involved the use of the logical operation of implication. The interviews were conducted with eight children aged 10 to 16, who attended the final years of elementary school at two public schools. Based on the data, the analysis reveals the importance of combinatorial thinking, which allows the adolescents to systematically test all the possibilities for ordering and including the suggested commands and to reach the appropriate logical conclusions, while the younger children do not achieve the same success. Moreover, the thesis discusses the inclusion of the school in a digital culture from a constructivist perspective of knowledge construction. In this context, the methodology of learning projects is presented as appropriate, and the Squeak Etoys software emerges as an interesting possibility for developing projects and promoting the learning of mathematics. Finally, this work also discusses the importance of learning to program in school.
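The INRC group invoked above (Identity, Negation, Reciprocal, Correlative) can be checked mechanically on the implication p → q by representing each transformation as an operation on truth functions; this is a sketch of the logic itself, not of the thesis's clinical protocol.

```python
from itertools import product

I = lambda f: f                                   # Identity
N = lambda f: (lambda p, q: not f(p, q))          # Negation: complement
R = lambda f: (lambda p, q: f(q, p))              # Reciprocal: swap p and q
C = lambda f: N(R(f))                             # Correlative: N composed with R

implies = lambda p, q: (not p) or q               # p -> q

def same(f, g):
    """Truth-functional equality over all four assignments of (p, q)."""
    return all(f(p, q) == g(p, q) for p, q in product([False, True], repeat=2))

# Each transformation is its own inverse, and N.R = R.N = C closes the group:
assert same(N(N(implies)), implies)
assert same(R(R(implies)), implies)
assert same(C(C(implies)), implies)
assert same(R(N(implies)), C(implies))
print(same(C(implies), lambda p, q: q and not p))  # True: C(p -> q) is q and not p
```

The four assertions are exactly the "full reversibility" the thesis looks for: an adolescent who mobilises the INRC group can compose and undo these transformations at will.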
APA, Harvard, Vancouver, ISO, and other styles
48

Santamaria, Daniele Francesco. "Automated Reasoning via a Multi-sorted Fragment of Computable Set Theory with Applications to Semantic Web." Doctoral thesis, Università di Catania, 2019. http://hdl.handle.net/10761/4137.

Full text
Abstract:
Semantic Web is a vision of the web in which machine-readable data enables software agents to manipulate and query information on behalf of human agents. To achieve this goal, machines are provided with appropriate languages and tools. Investigating new technologies which can extend the power of knowledge representation and reasoning systems is the main task of my research work, which started by observing the lack of some desirable characteristics concerning the expressiveness of semantic web languages and their integration with some features of rule-based languages, a lack that arose in the study of an application domain during the draft of my bachelor thesis. Specifically, in this dissertation I consider some results of Computable Set Theory, a research field rich in interesting theoretical results, in particular concerning multi-sorted and multi-level syllogistic fragments, in order to provide a novel and powerful knowledge representation and reasoning framework particularly devoted to the context of the Semantic Web. For this purpose, I use a syllogistic fragment of computable set theory called 4LQS^R, admitting variables of four sorts and a restricted form of quantification over variables of the first three sorts, to represent and reason on expressive decidable description logics (DLs) which can be used to represent ontological knowledge via Semantic Web technologies. I show that the fragment 4LQS^R allows one to represent an expressive DL called DL<4LQS^R>(D) (DL^4_D, for short) and its extension DL<4LQS^{R,x}>(D) (DL^{4,x}_D, for short). The DL DL^{4,x}_D admits, among other features, Boolean operations on concepts and roles, data types, and product of concepts. Moreover, 4LQS^R provides a unique formalism which combines the features of the DL DL^{4,x}_D with a rule language admitting full negation and subject to no safety condition.
I show that 4LQS^R can be used to represent a novel Web Ontology Language (OWL) 2 profile, and hence can be used as a reasoning framework for a large family of ontologies. Then, I study the most widespread reasoning tasks concerning both DL^{4,x}_D TBoxes and ABoxes. In particular, I consider the Consistency Problem for DL^{4,x}_D knowledge bases (KBs); Conjunctive Query Answering (CQA) for DL^{4,x}_D, which provides a simple mechanism allowing users and applications to interact with data stored in the KB; and Higher-Order Conjunctive Query Answering (HOCQA) for DL^{4,x}_D, a generalization of the CQA problem admitting three sorts of variables, namely individuals, concepts, and roles. The decidability of the above-mentioned problems is proved by reducing them to analogous problems in the context of the set-theoretic language 4LQS^R, by means of a suitable mapping function from axioms and assertions of DL^{4,x}_D to formulae of 4LQS^R. Then, I provide a correct and terminating algorithm for the HOCQA problem for DL^{4,x}_D based on the KE system, a refutation system inspired by Smullyan's semantic tableaux, giving also computational complexity results. The algorithm also serves for the consistency problem of DL^{4,x}_D KBs and for other reasoning tasks to which the HOCQA problem can be instantiated. I also introduce a C++ implementation of the algorithm, thus providing an effective reasoner for the DL DL^{4,x}_D which accepts ontologies serialized in the OWL/XML format.
APA, Harvard, Vancouver, ISO, and other styles
49

Baydin, Atilim Gunes. "Evolutionary adaptation in case-based reasoning. An application to cross-domain analogies for mediation." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/129294.

Full text
Abstract:
Analogy plays a fundamental role in problem solving and it lies behind many processes central to human cognitive capacity, to the point that it has been considered "the core of cognition". Analogical reasoning functions through the process of transfer, the use of knowledge learned in one situation in another for which it was not targeted. The case-based reasoning (CBR) paradigm presents a highly related, but slightly different, model of reasoning mainly used in artificial intelligence, different in part because analogical reasoning commonly focuses on cross-domain structural similarity whereas CBR is concerned with transfer of solutions between semantically similar cases within one specific domain. In this dissertation, we join these interrelated approaches from cognitive science, psychology, and artificial intelligence in a CBR system where case retrieval and adaptation are accomplished by the Structure Mapping Engine (SME) and are supported by commonsense reasoning integrating information from several knowledge bases. To enable this, we use a case representation structure that is based on semantic networks. This gives us a CBR model capable of recalling and adapting solutions from seemingly different, but structurally very similar, domains, forming one of our contributions in this study. A traditional weakness of research on CBR systems has always been adaptation, where most applications settle for a very simple "reuse" of the solution from the retrieved case, mostly through null adaptation or substitutional adaptation. The difficulty of adaptation is even more obvious in our case of cross-domain CBR using semantic networks. Solving this difficulty paves the way to another contribution of this dissertation, where we introduce a novel generative adaptation technique based on evolutionary computation that enables the spontaneous creation or modification of semantic networks according to the needs of CBR adaptation.
For the evaluation of this work, we apply our CBR system to the problem of mediation, an important method in conflict resolution. The mediation problem is non-trivial and presents a very good real world example where we can spot structurally similar problems from domains seemingly as far as international relations, family disputes, and intellectual rights.
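The generative adaptation by evolutionary computation described above can be caricatured by a (1+1) evolutionary loop; here a bit string stands in for a semantic network and a fixed target for the adaptation goal, both purely illustrative.

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]       # stands in for the fully adapted network

def fitness(c):
    """Number of positions already matching the adaptation goal."""
    return sum(a == b for a, b in zip(c, TARGET))

candidate = [0] * len(TARGET)           # start from an empty structure
while fitness(candidate) < len(TARGET):
    # Flip each bit with probability 0.2, keep neutral or improving mutants.
    mutant = [bit ^ (random.random() < 0.2) for bit in candidate]
    if fitness(mutant) >= fitness(candidate):
        candidate = mutant

print(candidate == TARGET)  # True once the loop exits
```

The dissertation's operator mutates graph structure (nodes and relations of a semantic network) under a fitness derived from the mediation problem, but the generate-evaluate-keep cycle is the same.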
APA, Harvard, Vancouver, ISO, and other styles
50

Mercier, Chloé. "Modéliser les processus cognitifs dans une tâche de résolution créative de problème : des approches symboliques à neuro-symboliques en sciences computationnelles de l'éducation." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0065.

Full text
Abstract:
Integrating transversal skills such as creativity, problem solving and computational thinking into the primary and secondary curricula is a key challenge in today's educational field. We postulate that teaching and assessing transversal competencies could benefit from a better understanding of learners' behaviors in specific activities that require these competencies. To this end, computational learning science is an emerging field that requires the close collaboration of computational neuroscience and educational sciences to enable the assessment of learning processes. We focus on a creative problem-solving task in which the subject is engaged in building a "vehicle" by combining modular robotic cubes. As part of an exploratory research action, we propose several approaches based on symbolic to neuro-symbolic formalisms, in order to specify such a task and model the behavior and underlying cognitive processes of a subject engaged in this task. Despite being at a very preliminary stage, such a formalization seems promising for better understanding the complex mechanisms involved in creative problem solving at several levels: (i) the specification of the problem and the observables of interest to collect during the task; (ii) the cognitive representation of the problem space, depending on prior knowledge and affordance discovery, allowing the generation of creative solution trajectories; (iii) an implementation of reasoning mechanisms within a neuronal substrate.
APA, Harvard, Vancouver, ISO, and other styles