Doctoral dissertations on the topic "Knowledge representation (Information theory)"

Follow this link to see other types of publications on this topic: Knowledge representation (Information theory).

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Review the top 50 doctoral dissertations on the topic "Knowledge representation (Information theory)".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, where these details are available in the record's metadata.

Browse doctoral dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Smith, Julian P. "Neural networks, information theory and knowledge representation". Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/20801.

Full text source
2

Plate, Tony A. "Holographic reduced representation : distributed representation for cognitive structures /". Stanford, Calif. : CSLI, 2003. http://www.loc.gov/catdir/toc/uchi051/2003043513.html.

Full text source
3

Khor, Sebastian W. "A fuzzy knowledge map framework for knowledge representation /". Access via Murdoch University Digital Theses Project, 2006. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20070822.32701.

Full text source
4

Ding, Yingjia. "Knowledge retention with genetic algorithms by multiple levels of representation". Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-12052009-020026/.

Full text source
5

Barb, Adrian S. "Knowledge representation and exchange of visual patterns using semantic abstractions". Diss., Columbia, Mo. : University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/6674.

Full text source
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2008.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on July 21, 2009). Includes bibliographical references.
6

Babaian, Tamara. "Knowledge representation and open world planning using [Greek letter Psi]-forms /". Thesis, Connect to Dissertations & Theses @ Tufts University, 2000.

Find full text source
Abstract:
Thesis (Ph. D.)--Tufts University, 2000.
Adviser: James G. Schmolze. Submitted to the Dept. of Computer Science. Includes bibliographical references (leaves 148-156). Access restricted to members of the Tufts University community. Also available via the World Wide Web.
7

Nowak, Krzysztof Zbigniew. "Conceptual reasoning : belief, multiple agents and preference /". Title page, table of contents and abstract only, 1998. http://web4.library.adelaide.edu.au/theses/09PH/09phn946.pdf.

Full text source
8

Salgado-Arteaga, Francisco. "A study on object-oriented knowledge representation". Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/935944.

Full text source
Abstract:
This thesis is a study on object-oriented knowledge representation. The study defines the main concepts of the object model. It also shows pragmatically the use of object-oriented methodology in the development of a concrete software system designed as the solution to a specific problem. The problem is to simulate the interaction between several animals and various other objects that exist in a room. The proposed solution is an artificial intelligence (AI) program designed according to the object-oriented model, which closely simulates objects in the problem domain. The AI program is conceived as an inference engine that maps together a given knowledge base with a database. The solution is based conceptually on the five major elements of the model, namely abstraction, encapsulation, modularity, hierarchy, and polymorphism. The study introduces a notation of class diagrams and frames to capture the essential characteristics of the system defined by analysis and design. The solution to the problem allows the application of any object-oriented programming language. Common Lisp Object System (CLOS) is the language used for the implementation of the software system included in the appendix.
Department of Computer Science
9

Pivkina, Inna Valentinovna. "REVISION PROGRAMMING: A KNOWLEDGE REPRESENTATION FORMALISM". Lexington, Ky. : [University of Kentucky Libraries], 2001. http://lib.uky.edu/ETD/ukycosc2001d00022/pivkina.pdf.

Full text source
Abstract:
Thesis (Ph. D.)--University of Kentucky, 2001.
Title from document title page. Document formatted into pages; contains vii, 121 p. : ill. Includes abstract. Includes bibliographical references (p. 116-119).
10

Austin, Lydia B. (Lydia Bronwen). "Individual differences in knowledge representation and problem-solving performance in physics". Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=41100.

Full text source
Abstract:
Concept mapping in college-level physics was investigated. The study was carried out in three parts. First, an attempt was made to validate concept mapping as a method of evaluating student learning at the junior college level (ages 16-21). Several measures were found to be sensitive to differences in students' achievement. Second, the effectiveness of concept mapping as an instructional strategy was investigated. It was found that the strategy led to improvement in multistep problem-solving performance but not in performance on single step problems. Third, the concept maps made by experts in the field were compared with the maps made by high achieving and average achieving students to see if this is yet another way in which high performance and expertise are related. It was found that the high achieving students made maps which more nearly resembled the maps made by experts than those made by average achieving students.
11

Nuopponen, Anita. "Begreppssystem för terminologisk analys". Vasa : Universitas Wasaensis, 1994. http://catalog.hathitrust.org/api/volumes/oclc/32858045.html.

Full text source
12

Li, Vincent. "Knowledge representation and problem solving for an intelligent tutoring system". Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29657.

Full text source
Abstract:
As part of an effort to develop an intelligent tutoring system, a set of knowledge representation frameworks was proposed to represent expert domain knowledge. A general representation of time points and temporal relations was developed to facilitate temporal concept deductions as well as to support explanation capabilities vital in an intelligent advisor system. Conventional representations of time use a single-referenced timeline and assign a single unique value to the time of occurrence of an event. They fail to capture the notion of events, such as changes in signal states in microcomputer systems, which do not occur at precise points in time, but rather over a range of time with some probability distribution. Time is, fundamentally, a relative quantity. In conventional representations, this relative relation is implicitly defined with a fixed reference, "time-zero", on the timeline. This definition is insufficient if an explanation of the temporal relations is to be constructed. The proposed representation of time solves these two problems by representing a time point as a time-range and making the reference point explicit. An architecture of the system was also proposed to provide a means of integrating various modules as the system evolves, as well as a modular development approach. A production rule EXPERT based on the rule framework used in the Graphic Interactive LISP tutor (GIL) [44, 45], an intelligent tutor for LISP programming, was implemented to demonstrate the inference process using this time point representation. The EXPERT is goal-driven and is intended to be an integral part of a complete intelligent tutoring system.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
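The range-based, explicitly referenced time representation described in the abstract above lends itself to a small illustration. The following Python sketch is only an interpretation of that idea under simplifying assumptions (uniform intervals, a shared named reference, a three-valued precedence test); the class and function names are invented for illustration and do not come from the thesis.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimePoint:
    """A time 'point' modelled as a range [earliest, latest], measured from an
    explicitly named reference event rather than an implicit global time-zero."""
    reference: str    # name of the reference event, e.g. a clock edge
    earliest: float   # lower bound on the occurrence time (ns after the reference)
    latest: float     # upper bound on the occurrence time

def definitely_before(a: TimePoint, b: TimePoint) -> Optional[bool]:
    """Three-valued precedence test for points sharing a reference:
    True/False when the ranges do not overlap, None when the order is unknown."""
    if a.reference != b.reference:
        raise ValueError("compare points only against a common reference")
    if a.latest < b.earliest:
        return True
    if b.latest < a.earliest:
        return False
    return None  # overlapping ranges: the order cannot be deduced

# Two signal transitions measured from the same clock edge (hypothetical values).
addr_valid = TimePoint("CLK_rising_edge", 5.0, 12.0)
data_valid = TimePoint("CLK_rising_edge", 15.0, 20.0)
print(definitely_before(addr_valid, data_valid))  # True
print(definitely_before(data_valid, addr_valid))  # False
```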
13

Bjarnadottir, Bjorg. "Phases of knowledge in lexical acquisition : a developmental study into four to twelve year olds decipherment of unfamiliar words from linguistic contexts during continuous assessment". Thesis, University of Stirling, 1996. http://hdl.handle.net/1893/2608.

Full text source
Abstract:
Research on the deciphering of nonsense words within the context of text, a story, or tale was conducted at various schools and day-care centres in the Stirling area of Scotland in 1985-1988. Three experiments were conducted, in which large samples of primary school children aged 4-12 were tested. The experiments resembled Werner and Kaplan's (1950) "Word-Context Task," in which isolated sentences in a series with one nonsense word in each sentence were presented to school children. The children were asked to answer questions about the meanings of these words. The results were not in line with the rapid word learning that experience suggests happens in young children: it was not until after age 9 that the children started to give approximately correct answers, and prior to age 11 the answers did not meet up with proper adult definitions. It has been pointed out, however (Donaldson, 1978), that because these sentences were not supported by any relation to immediate context and behaviour, and because the children were required to process utterances as pure isolated language - an unnatural situation for language acquisition - the "Word-Context Task" may have given an unrealistic picture of the child's ability to acquire language naturally. In the three word-learning studies at Stirling University in 1985-1988, in order to account for a more natural presentation, the sentences with the nonsense word were embodied in the context of a story. Children were thought to fare better (than the children in the Werner & Kaplan study) when listening to such a story, especially if the basic theme was of interest. A methodological tool, refined in the work of Dockrell (1981), in which the full meaning of a term involves having worked out the sense, reference, and denotation of the term, was applied in each of the test batteries that followed the presentation of the story. In these tests, the children were tested on both their comprehension and production of the new term in question. Drawings were used in order to try to tap the children's denotation of the new term, and to facilitate young children's approach to the demands of the study. As regards word meaning in general, Martin Joos (1972) had argued that the common blunder was that an odd word must have an odd sense--the odder, the better. He argued that one should define words in such a fashion as to make them contribute least to the total message derivable from the passage where it is housed, rather than, e.g., defining it according to some presumed etymology of semantic history. He called this concept "a tacit principle", and argued that word learners and word users would sense the intuitive familiarity of the conveyed meaning of words and text. Words are, according to this principle, "mysterious" in their environment: their meanings are not worked out deliberately, intentionally; rather, one should make the mysterious item maximally supportive and supported in its situation, in order that redundancy would result in proper connotation of the distributed meaning. Context and knowledge of contexts reveal meaning; the text is processed holistically, and so are the instantaneous meanings of the words of which it is composed. Thus, Joos maintained that in deciphering an unknown word, the wisest course is to assume the "least meaning" consistent with the context.
Tasks such as Werner and Kaplan's "Word-Context Task" (1950) force subjects to infer aspects of meaning that go well beyond this "least" meaning and, as Joos pointed out, this leads notably to errors from which recovery is difficult. In the studies at Stirling University, attempts were made to determine if different types of learning would result in different types of responses. The dichotomy intentional/incidental or analytic/holistic was worked out into experimental and control conditions, as based on Aveling's pioneering experiment (1911, 1912) into the general and particular aspects of encoded stimuli. Later, Lee Brooks (1978) worked with the dichotomies intentional/incidental in his Lepton experiments and argued that the more complex a behaviour is (speaking or writing, for example), the more likely it is to be learned implicitly. He pointed out, however, that the dichotomies explicit/implicit, analytic/non-analytic, and deliberate vs. intuitive processes need to be elaborated and not taken as a strict division. In the three experiments at Stirling, children of primary school age (ages 4 to 12) were presented with a "word-context" task and their understanding of the unknown word was probed under different conditions. In the control condition a control word was probed, but in the experimental condition the child's understanding of the target word was fully tested. All the children listened to a short story displayed by a video or read from a tape in which the unknown word occurred in several different contexts; the unknown word in each story denoted an unfamiliar natural kind. During the story's display, children in the control condition were, at certain intervals, asked questions about the story's theme. Children in the experimental group were, at these same intervals, shown a sample of objects, to one of which the unknown word referred, and they were asked to hand these objects to the experimenter as she requested the objects, or they were asked direct questions about the meaning of the target word and about other words in the story. After hearing the story, all subjects were tested on their comprehension and production of the unknown word, together with other words, and a scoring procedure based on a technique developed by Dockrell (1981) was applied. This procedure necessitated the full meaning of the term covering aspects of the sense, reference, and denotation of the new term (cf. Lyons, 1977a). The results indicate that children younger than those tested in Werner and Kaplan's "Word-Context Task" (ages 8.6 to 13.6) could decipher the full meaning of the new term. But individual differences within age groups showed greater differences than existed between age groups. All in all, the results indicate that working out the full meaning of a new term is a lengthy process indeed (Campbell & Dockrell, 1986), even though a sense of the given semantic domain may often be established quite early in the learning process. Performance styles also differ from younger children to older ones. The results indicate that there were significant age differences between the children in the first and second experiments, but that such differences were lacking in the third experiment, and that control subjects in the three studies seldom gave poorer responses than did experimental subjects and often did better. However, the results must be interpreted in the light of learning and recovery from error occurring within the experimental subjects in the course of deciphering.
If the initial scores of the experimental subjects on the target word as obtained during encoding are compared with the first scores obtained from the control subjects after they had heard the whole story, there is a significant difference in scores between the conditions in favour of the control subjects in all age groups. This is consistent with Joos's assumption that an inference about the meaning of a word made too early in the learning task, with too few contextual cues, will lead the children in the experimental groups astray in their guesses when they are asked too early for answers about the new word's meaning. But implied in Joos's Axiom is the likelihood of recovery from errors, and the strategies children use in order to work them out need to be explored further. Much individual variation was found among the children's responses in the age groups. These differences were indeed more significant than were the differences between age groups.
14

Thivierge, Jean-Philippe. "Knowledge selection, mapping and transfer in artificial neural networks". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=111824.

Full text source
Abstract:
Knowledge-based Cascade-correlation is a neural network algorithm that combines inductive learning and knowledge transfer (Shultz & Rivest, 2001). In the present thesis, this algorithm is tested on several real-world and artificial problems, and extended in several ways. The first extension consists in the incorporation of the Knowledge-based Artificial Neural Network (KBANN; Shavlik, 1994) technique for generating rule-based (RBCC) networks. The second extension consists of the adaptation of the Optimal Brain Damage (OBD; LeCun, Denker, & Solla, 1990) pruning technique to remove superfluous connection weights. Finally, the third extension consists in a new objective function based on information theory for controlling the distribution of knowledge attributed to subnetworks. A simulation of lexical ambiguity resolution is proposed. In this study, the use of RBCC networks is motivated from a cognitive and neurophysiological perspective.
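One of the extensions mentioned, Optimal Brain Damage pruning, ranks connection weights by a saliency derived from the diagonal of the loss Hessian and removes the least salient ones. The sketch below illustrates that standard ranking on toy arrays; the variable names, the pruning fraction, and the use of plain NumPy are illustrative assumptions, not code from the thesis or from KBCC.

```python
import numpy as np

def obd_saliency(weights, hessian_diag):
    """Optimal Brain Damage saliency (LeCun et al., 1990):
    s_i ~ 0.5 * H_ii * w_i**2, the estimated loss increase if w_i is removed."""
    return 0.5 * hessian_diag * weights ** 2

def prune_lowest(weights, hessian_diag, fraction=0.2):
    """Zero out the given fraction of weights with the smallest saliency."""
    saliency = obd_saliency(weights, hessian_diag)
    k = int(len(weights) * fraction)
    idx = np.argsort(saliency)[:k]   # least-salient connections
    pruned = weights.copy()
    pruned[idx] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=10)              # toy connection weights
h = np.abs(rng.normal(size=10))      # toy diagonal Hessian estimates
print(prune_lowest(w, h))
```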
15

Moorman, Kenneth Matthew. "A functional theory of creative reading : process, knowledge, and evaluation". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/9122.

Full text source
16

Kong, Choi-yu. "Effective partial ontology mapping in a pervasive computing environment". Click to view the E-thesis via HKUTO, 2004. http://sunzi.lib.hku.hk/hkuto/record/B32002737.

Full text source
17

Kong, Choi-yu, and 江采如. "Effective partial ontology mapping in a pervasive computing environment". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B32002737.

Full text source
18

Assefa, Shimelis G. "Human concept cognition and semantic relations in the unified medical language system: A coherence analysis". Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc4008/.

Full text source
Abstract:
There is almost universal agreement among scholars in information retrieval (IR) research that knowledge representation needs improvement. As a core component of an IR system, improvement of the knowledge representation system has so far involved manipulation of this component based on principles such as vector space, probabilistic approach, inference network, and language modeling, yet the required improvement is still far from fruition. One promising approach that is highly touted to offer a potential solution exists in the cognitive paradigm, where knowledge representation practice should involve, or start from, modeling the human conceptual system. This study, based on two related cognitive theories (the theory-based approach to concept representation and the psychological theory of semantic relations), ventured to explore the connection between the human conceptual model and the knowledge representation model (represented by samples of concepts and relations from the unified medical language system, UMLS). Guided by these cognitive theories and based on related and appropriate data-analytic tools, such as nonmetric multidimensional scaling, hierarchical clustering, and content analysis, this study aimed to conduct an exploratory investigation to answer four related questions. Divided into two groups, a total of 89 research participants took part in two sets of cognitive tasks. The first group (49 participants) sorted 60 food names into categories followed by simultaneous description of the derived categories to explain the rationale for category judgment. The second group (40 participants) sorted 47 semantic relations (the nonhierarchical associative types) into 5 categories known a priori. Three datasets resulted from the cognitive tasks: food-sorting data, relation-sorting data, and free and unstructured text of category descriptions. Using the data-analytic tools mentioned, data analysis was carried out and important results and findings were obtained that offer plausible explanations for the 4 research questions. Major results include the following: (a) through discriminant analysis category members were predicted consistently 70% of the time; (b) the categorization bases are largely simplified rules, naïve explanations, and feature-based; (c) individuals' theoretical explanations remain valid and stay stable across category members; (d) the human conceptual model can be fairly reconstructed in a low-dimensional space where 93% of the variance in the dimensional space is accounted for by the subjects' performance; (e) participants consistently classify 29 of the 47 semantic relations; and (f) individuals perform better in the functional and spatial dimensions of the semantic relations classification task and perform poorly in the conceptual dimension.
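The analysis pipeline described in the abstract (sorting data summarised as co-occurrences, then nonmetric multidimensional scaling and hierarchical clustering) can be sketched as follows. The tiny item set, the co-occurrence counts, and the conversion of similarity to dissimilarity are invented for illustration; only the general combination of scikit-learn's nonmetric MDS and SciPy's hierarchical clustering is intended to mirror the tools named above.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

items = ["apple", "banana", "carrot", "potato"]
# Toy co-occurrence matrix: how many of 20 participants sorted items i and j
# into the same category (diagonal = number of participants).
co = np.array([[20, 18,  2,  1],
               [18, 20,  3,  2],
               [ 2,  3, 20, 17],
               [ 1,  2, 17, 20]], dtype=float)
dist = 1.0 - co / co.max()                    # similarity -> dissimilarity, zero diagonal

# Nonmetric MDS recovers a low-dimensional "conceptual space" from the sorting data.
coords = MDS(n_components=2, metric=False, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

# Hierarchical clustering over the same dissimilarities.
labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")

for item, xy, lab in zip(items, coords, labels):
    print(f"{item:8s} cluster={lab} position={np.round(xy, 2)}")
```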
19

Duminy, Willem H. "A learning framework for zero-knowledge game playing agents". Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-10172007-153836.

Full text source
20

Dulipovici, Alina Maria. "Exploring IT-Based Knowledge Sharing Practices: Representing Knowledge within and across Projects". Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/cis_diss/33.

Full text source
Abstract:
Drawing on the social representation literature combined with a need to better understand knowledge sharing across projects, this research lays the ground for the development of a theoretical account seeking to explain the relationship between project members’ representations of knowledge sharing practices and the use of knowledge-based systems as boundary objects or shared systems. The concept of social representations is particularly appropriate for studying social issues in continuous evolution such as the adoption of a new information system. The research design is structured as an interpretive case study, focusing on the knowledge sharing practices within and across four project groups. The findings showed significant divergence among the groups’ social representations. Sharing knowledge across projects was rather challenging, despite the potential advantages provided by the knowledge-based system. Therefore, technological change does not automatically trigger the intended changes in work practices and routines. The groups’ social representations need to be aligned with the desired behaviour or patterns of actions.
21

Madhavan, Jayant. "Using known schemas and mappings to construct new semantic mappings /". Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6852.

Full text source
22

Assefa, Shimelis G., and O'Connor, Brian C. "Human concept cognition and semantic relations in the unified medical language system : a coherence analysis". [Denton, Tex.] : University of North Texas, 2007. http://digital.library.unt.edu/permalink/meta-dc-4008.

Full text source
23

Wang, Yufei. "Ontology engineering the brain gene ontology case study : submitted by Yufei Wang ... in partial fulfillment of the requirements for the degree of Master of Computer and Information Sciences, Auckland University of Technology, March 2007". Click here access this resource online, 2007. http://aut.researchgateway.ac.nz/handle/10292/104.

Full text source
Abstract:
Thesis (MCIS - Computer and Information Sciences) --AUT University, 2007.
Includes bibliographical references. Also held in print (ix, 74 leaves : ill. ; 30 cm.) in City Campus Theses Collection (T 006.33 WAN)
24

Wood, A. R. "Acoustic-phonetic reasoning in computer understanding of speech using frame-based expert knowledge to interpret the 'Speech Sketch', a representation of the acoustic parametric behaviour". Thesis, Staffordshire University, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356461.

Full text source
25

East, Deborah Jeanine. "Datalog with constraints : a new answer-set programming formalism /". Lexington, Ky. : [University of Kentucky Libraries], 2001. http://lib.uky.edu/ETD/ukycosc2001d00017/deast-06-01.pdf.

Full text source
Abstract:
Thesis (Ph. D.)--University of Kentucky, 2001.
Title from document title page. Document formatted into pages; contains vii, 75 p. : ill. Includes abstract. Includes bibliographical references (p. 70-72).
26

Almassian, Amin. "Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons". PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2724.

Full text source
Abstract:
Real-time processing of space-and-time-variant signals is imperative for perception and real-world problem-solving. In the brain, spatio-temporal stimuli are converted into spike trains by sensory neurons and projected to the neurons in subcortical and cortical layers for further processing. Reservoir Computing (RC) is a neural computation paradigm that is inspired by cortical Neural Networks (NN). It is promising for real-time, on-line computation of spatio-temporal signals. An RC system incorporates a Recurrent Neural Network (RNN) called reservoir, the state of which is changed by a trajectory of perturbations caused by a spatio-temporal input sequence. A trained, non-recurrent, linear readout-layer interprets the dynamics of the reservoir over time. Echo-State Network (ESN) [1] and Liquid-State Machine (LSM) [2] are two popular and canonical types of RC system. The former uses non-spiking analog sigmoidal neurons – and, more recently, Leaky Integrator (LI) neurons – and a normalized random connectivity matrix in the reservoir. In contrast, the reservoir in the latter is composed of Leaky Integrate-and-Fire (LIF) neurons, distributed in a 3-D space, which are connected with dynamic synapses through a probability function. The major difference between analog neurons and spiking neurons is in their neuron model dynamics and their inter-neuron communication mechanism. However, RC systems share a mysterious common property: they exhibit the best performance when reservoir dynamics undergo a criticality [1–6] – governed by the reservoirs’ connectivity parameters, |λmax| ≈ 1 in ESN, λ ≈ 2 and w in LSM – which is referred to as the edge of chaos in [3–5]. In this study, we are interested in exploring the possible reasons for this commonality, despite the differences imposed by different neuron types in the reservoir dynamics. We address this concern from the perspective of the information representation in both spiking and non-spiking reservoirs. We measure the Mutual Information (MI) between the state of the reservoir and a spatio-temporal spike-trains input, as well as that between the reservoir and a linearly inseparable function of the input, temporal parity. In addition, we derive the Mean Cumulative Mutual Information (MCMI) quantity from MI to measure the amount of stable memory in the reservoir and its correlation with the temporal parity task performance. We complement our investigation by conducting isolated spoken-digit recognition and spoken-digit sequence-recognition tasks. We hypothesize that a performance analysis of these two tasks will agree with our MI and MCMI results with regard to the impact of stable memory in task performance. It turns out that, in all reservoir types and in all the tasks conducted, reservoir performance peaks when the amount of stable memory in the reservoir is maximized. Likewise, in the chaotic regime (when the network connectivity parameter is greater than a critical value), the absence of stable memory in the reservoir seems to be an evident cause for performance decrease in all conducted tasks. Our results also show that the reservoir with LIF neurons possesses a higher stable memory of the input (quantified by input-reservoir MCMI) and outperforms the reservoirs with analog sigmoidal and LI neurons in processing the temporal parity and spoken-digit recognition tasks. From an efficiency standpoint, the reservoir with 100 LIF neurons outperforms the reservoir with 500 LI neurons in spoken-digit recognition tasks.
The sigmoidal reservoir falls short of solving this task. The optimum input-reservoir MCMIs and output-reservoir MCMIs we obtained for the reservoirs with LIF, LI, and sigmoidal neurons are 4.21, 3.79, and 3.71, and 2.92, 2.51, and 2.47, respectively. In our isolated spoken-digit recognition experiments, the maximum achieved mean performance by the reservoirs with N = 500 LIF, LI, and sigmoidal neurons is 97%, 79%, and 2%, respectively. The reservoirs with N = 100 neurons could solve the task with 80%, 68%, and 0.9% accuracy, respectively. Our study sheds light on the impact of the information representation and memory of the reservoir on the performance of RC systems. The results of our experiments reveal the advantage of using LIF neurons in RC systems for computing spike-trains to solve memory-demanding, real-world, spatio-temporal problems. Our findings have applications in engineering nano-electronic RC systems that can be used to solve real-world spatio-temporal problems.
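For readers unfamiliar with the Echo-State Network half of the comparison, the sketch below shows the usual ingredients the abstract refers to: a random reservoir rescaled so that |λmax| is close to 1, tanh (analog) neurons, and a linear readout trained on a temporal-parity target. All parameters, the 3-step parity target, and the ridge readout are assumptions for illustration, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 200

# Random reservoir scaled so the spectral radius |lambda_max| is near 1,
# the "edge of chaos" regime discussed in the abstract for ESNs.
W = rng.normal(size=(N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=N)

u = rng.integers(0, 2, size=T).astype(float)   # toy binary input stream
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])           # analog (tanh) reservoir neurons
    states[t] = x

# Linear readout trained to reproduce a 3-step temporal parity of the input,
# a memory-demanding, linearly inseparable target (ridge-regularised least squares).
target = np.array([u[t] if t < 2 else (u[t] + u[t - 1] + u[t - 2]) % 2 for t in range(T)])
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
pred = (states @ W_out) > 0.5
print("training accuracy on temporal parity:", np.mean(pred == target))
```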
27

Babalola, Olubi Oluyomi. "A model based framework for semantic interpretation of architectural construction drawings". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47553.

Full text source
Abstract:
The study addresses the automated translation of architectural drawings from 2D Computer Aided Drafting (CAD) data into a Building Information Model (BIM), with emphasis on the nature, possible role, and limitations of a drafting language Knowledge Representation (KR) on the problem and process. The central idea is that CAD to BIM translation is a complex diagrammatic interpretation problem requiring a domain (drafting language) KR to render it tractable and that such a KR can take the form of an information model. Formal notions of drawing-as-language have been advanced and studied quite extensively for close to 25 years. The analogy implicitly encourages comparison between problem structures in both domains, revealing important similarities and offering guidance from the more mature field of Natural Language Understanding (NLU). The primary insight we derive from NLU involves the central role that a formal language description plays in guiding the process of interpretation (inferential reasoning), and the notable absence of a comparable specification for architectural drafting. We adopt a modified version of Engelhard's approach which expresses drawing structure in terms of a symbol set, a set of relationships, and a set of compositional frameworks in which they are composed. We further define an approach for establishing the features of this KR, drawing upon related work on conceptual frameworks for diagrammatic reasoning systems. We augment this with observation of human subjects performing a number of drafting interpretation exercises and derive some understanding of its inferential nature therefrom. We consider this indicative of the potential range of inferential processes a computational drafting model should ideally support. The KR is implemented as an information model using the EXPRESS language because it is in the public domain and is the implementation language of the target Industry Foundation Classes (IFC) model. We draw extensively from the IFC library to demonstrate that it can be applied in this manner, and apply the MVD methodology in defining the scope and interface of the DOM and IFC. This simplifies the IFC translation process significantly and minimizes the need for mapping. We conclude on the basis of selective implementations that a model reflecting the principles and features we define can indeed provide needed and otherwise unavailable support in drafting interpretation and other problems involving reasoning with this class of diagrammatic representations.
28

Muzondo, Shingirirai. "Knowledge production in a think tank: a case study of the Africa Institute of South Africa (AISA)". Thesis, University of Fort Hare, 2009. http://hdl.handle.net/10353/252.

Full text source
Abstract:
The study sought to investigate the system of knowledge production at AISA and assess the challenges of producing knowledge at the institution. The objectives of the study were to: identify AISA’s main achievements in knowledge production; determine AISA’s challenges in producing knowledge; find out how AISA’s organizational culture impacts on internal knowledge production; and suggest ways of improving knowledge production at AISA. A case study was used as a research method and purposive sampling used to select 50 cases out of a study population of 70. Questionnaires were prepared and distributed to AISA employees and where possible face-to-face interviews were conducted. Both quantitative and qualitative methods were used to analyze the data which were collected. Findings of the study may be used by governments across sub-Saharan Africa to produce relevant knowledge for formulating and implementing economic, social and technological policies. It is also important in identifying challenges that may hinder the successful production of knowledge. The study revealed that AISA has a well defined system of knowledge production and has had many achievements that have contributed to its relevance as a think tank today. The study found out that AISA has faced different challenges with the main one being organizational culture. From the findings, the researcher recommended that AISA should establish itself as a knowledge-based organization. It should also create a knowledge friendly culture as a framework for addressing the issue of organizational culture.
29

Milette, Greg P. "Analogical matching using device-centric and environment-centric representations of function". Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050406-145255/.

Full text source
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Analogy, Design, Functional Modeling, Functional Reasoning, Knowledge Representation, Repertory Grid, SME, Structure Mapping Engine, AI in design. Includes bibliographical references (p.106).
30

Waller, David A. "An assessment of individual differences in spatial knowledge of real and virtual environments /". Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/9049.

Full text source
31

McCallum, Simon. "Catastrophic forgetting and the pseudorehearsal solution in Hopfield networks". University of Otago. Department of Computer Sciences, 2007. http://adt.otago.ac.nz./public/adt-NZDU20080130.105101.

Full text source
Abstract:
Most artificial neural networks suffer from the problem of catastrophic forgetting, where previously learnt information is suddenly and completely lost when new information is learnt. Memory in real neural systems does not appear to suffer from this unusual behaviour. In this thesis we discuss the problem of catastrophic forgetting in Hopfield networks, and investigate various potential solutions. We extend the pseudorehearsal solution of Robins (1995) enabling it to work in this attractor network, and compare the results with the unlearning procedure proposed by Crick and Mitchison (1983). We then explore a familiarity measure based on the energy profile of the learnt patterns. By using the ratio of high energy to low energy parts of the network we can robustly distinguish the learnt patterns from the large number of spurious "fantasy" patterns that are common in these networks. This energy ratio measure is then used to improve the pseudorehearsal solution so that it can store 0.3N patterns in the Hopfield network, significantly more than previous proposed solutions to catastrophic forgetting. Finally, we explore links between the mechanisms investigated in this thesis and the consolidation of newly learnt material during sleep.
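A minimal sketch of the two ingredients named in the abstract, a Hebbian Hopfield network and the pseudorehearsal idea of training on attractor states sampled from the old network, is given below. The network size, the number of pseudo-patterns, and the synchronous update rule are illustrative assumptions; the thesis's energy-ratio familiarity measure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64

def train(patterns):
    """Hebbian Hopfield weights for a list of +/-1 patterns."""
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Standard Hopfield energy of state s."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=20):
    """Synchronous sign updates until (hopefully) an attractor is reached."""
    for _ in range(steps):
        s = np.sign(W @ s + 1e-9)
    return s

old = [rng.choice([-1.0, 1.0], size=N) for _ in range(5)]
W_old = train(old)

# Pseudorehearsal: instead of keeping the old patterns, sample random probes,
# let the old network settle them into attractors ("pseudo-patterns"), and
# train the new network on the new pattern plus these pseudo-patterns.
pseudo = [recall(W_old, rng.choice([-1.0, 1.0], size=N)) for _ in range(20)]
new = [rng.choice([-1.0, 1.0], size=N)]
W_new = train(new + pseudo)

# Check how well an old pattern can still be recalled from a noisy cue,
# and compare the energy of a stored state against a random state.
cue = old[0].copy()
cue[:6] *= -1
print("overlap with old pattern:", float(recall(W_new, cue) @ old[0]) / N)
print("energy (stored vs random):",
      energy(W_new, old[0]), energy(W_new, rng.choice([-1.0, 1.0], size=N)))
```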
32

Ren, Yuan. "Tractable reasoning with quality guarantee for expressive description logics". Thesis, University of Aberdeen, 2014. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=217884.

Full text source
Abstract:
DL-based ontologies have been widely used as knowledge infrastructures in knowledge management systems and on the Semantic Web. The development of efficient, sound and complete reasoning technologies has been a central topic in DL research. Recently, the paradigm shift from professional to novice users, and from standalone and static to inter-linked and dynamic applications raises new challenges: Can users build and evolve ontologies, both static and dynamic, with features provided by expressive DLs, while still enjoying efficient reasoning as in tractable DLs, without worrying too much about the quality (soundness and completeness) of results? To answer these challenges, this thesis investigates the problem of tractable and quality-guaranteed reasoning for ontologies in expressive DLs. The thesis develops syntactic approximation, a consequence-based reasoning procedure with worst-case PTime complexity, theoretically sound and empirically high-recall results, for ontologies constructed in DLs more expressive than any tractable DL. The thesis shows that a set of semantic completeness-guarantee conditions can be identified to efficiently check if such a procedure is complete. Many ontologies tested in the thesis, including difficult ones for an off-the-shelf reasoner, satisfy such conditions. Furthermore, the thesis presents a stream reasoning mechanism to update reasoning results on dynamic ontologies without complete re-computation. Such a mechanism implements the Delete-and-Re-derive strategy with a truth maintenance system, and can help to reduce unnecessary over-deletion and re-derivation in stream reasoning and to improve its efficiency. As a whole, the thesis develops a worst-case tractable, guaranteed sound, conditionally complete and empirically high-recall reasoning solution for both static and dynamic ontologies in expressive DLs. Some techniques presented in the thesis can also be used to improve the performance and/or completeness of other existing reasoning solutions. The results can further be generalised and extended to support a wider range of knowledge representation formalisms, especially when a consequence-based algorithm is available.
33

Desbiens, Charles. "Base de connaissances pour la supervision de procédés /". Thèse, Chicoutimi : Université du Québec à Chicoutimi, 1992. http://theses.uqac.ca.

Full text source
34

Sarkar, Somwrita. "Acquiring symbolic design optimization problem reformulation knowledge". Connect to full text, 2009. http://hdl.handle.net/2123/5683.

Full text source
Abstract:
Thesis (Ph. D.)--University of Sydney, 2009.
Title from title screen (viewed November 13, 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the Faculty of Architecture, Design and Planning in the Faculty of Science. Includes graphs and tables. Includes bibliographical references. Also available in print form.
35

Aucher, Guillaume. "Perspectives on belief and change". University of Otago. Department of Computer Science, 2008. http://adt.otago.ac.nz./public/adt-NZDU20081003.115428.

Full text source
Abstract:
This thesis is about logical models of belief (and knowledge) representation and belief change. This means that we propose logical systems which are intended to represent how agents perceive a situation and reason about it, and how they update their beliefs about this situation when events occur. These agents can be machines, robots, human beings... but they are assumed to be somehow autonomous. The way a fixed situation is perceived by agents can be represented by statements about the agents’ beliefs: for example ‘agent A believes that the door of the room is open’ or ‘agent A believes that her colleague is busy this afternoon’. ‘Logical systems’ means that agents can reason about the situation and their beliefs about it: if agent A believes that her colleague is busy this afternoon then agent A infers that he will not visit her this afternoon. We moreover often assume that our situations involve several agents which interact between each other. So these agents have beliefs about the situation (such as ‘the door is open’) but also about the other agents’ beliefs: for example agent A might believe that agent B believes that the door is open. These kinds of beliefs are called higher-order beliefs. Epistemic logic [Hintikka, 1962; Fagin et al., 1995; Meyer and van der Hoek, 1995], the logic of belief and knowledge, can capture all these phenomena and will be our main starting point to model such fixed (‘static’) situations. Uncertainty can of course be expressed by beliefs and knowledge: for example agent A being uncertain whether her colleague is busy this afternoon can be expressed by ‘agent A does not know whether her colleague is busy this afternoon’. But we sometimes need to enrich and refine the representation of uncertainty: for example, even if agent A does not know whether her colleague is busy this afternoon, she might consider it more probable that he is actually busy. So other logics have been developed to deal more adequately with the representation of uncertainty, such as probabilistic logic, fuzzy logic or possibilistic logic, and we will refer to some of them in this thesis (see [Halpern, 2003] for a survey on reasoning about uncertainty). But things become more complex when we introduce events and change in the picture. Issues arise even if we assume that there is a single agent. Indeed, if the incoming information conveyed by the event is coherent with the agent’s beliefs then the agent can just add it to her beliefs. But if the incoming information contradicts the agent’s beliefs then the agent has somehow to revise her beliefs, and as it turns out there is no obvious way to decide what should be her resulting beliefs. Solving this problem was the goal of the logic-based belief revision theory developed by Alchourrón, Gärdenfors and Makinson (to which we will refer by the term AGM) [Alchourrón et al., 1985; Gärdenfors, 1988; Gärdenfors and Rott, 1995]. Their idea is to introduce ‘rationality postulates’ that specify which belief revision operations can be considered as being ‘rational’ or reasonable, and then to propose specific revision operations that fulfill these postulates. However, AGM does not consider situations where the agent might also have some uncertainty about the incoming information: for example agent A might be uncertain due to some noise whether her colleague told her that he would visit her on Tuesday or on Thursday. In this thesis we also investigate this kind of phenomenon. 
Things are even more complex in a multi-agent setting because the way agents update their beliefs depends not only on their beliefs about the event itself but also on their beliefs about the way the other agents perceived the event (and so about the other agents’ beliefs about the event). For example, during a private announcement of a piece of information to agent A the beliefs of the other agents actually do not change because they believe nothing is actually happening; but during a public announcement all the agents’ beliefs might change because they all believe that an announcement has been made. Such kinds of subtleties have been dealt with in a field called dynamic epistemic logic [Gerbrandy and Groeneveld, 1997; Baltag et al., 1998; van Ditmarsch et al., 2007b]. The idea is to represent by an event model how the event is perceived by the agents and then to define a formal update mechanism that specifies how the agents update their beliefs according to this event model and their previous representation of the situation. Finally, the issues concerning belief revision that we raised in the single agent case are still present in the multi-agent case. So this thesis is more generally about information and information change. However, we will not deal with problems of how to store information in machines or how to actually communicate information. Such problems have been dealt with in information theory [Cover and Thomas, 1991] and Kolmogorov complexity theory [Li and Vitányi, 1993]. We will just assume that such mechanisms are already available and start our investigations from there. Studying and proposing logical models for belief change and belief representation has applications in several areas. First in artificial intelligence, where machines or robots need to have a formal representation of the surrounding world (which might involve other agents), and formal mechanisms to update this representation when they receive incoming information. Such formalisms are crucial if we want to design autonomous agents, able to act autonomously in the real world or in a virtual world (such as on the internet). Indeed, the representation of the surrounding world is essential for a robot in order to reason about the world, plan actions in order to achieve goals... and it must be able to update and revise its representation of the world itself in order to cope autonomously with unexpected events. Second in game theory (and consequently in economics), where we need to model games involving several agents (players) having beliefs about the game and about the other agents’ beliefs (such as agent A believes that agent B has the ace of spades, or agent A believes that agent B believes that agent A has the ace of hearts...), and how they update their representation of the game when events (such as showing privately a card or putting a card on the table) occur. Third in cognitive psychology, where we need to model as accurately as possible the epistemic state of human agents and the dynamics of belief and knowledge in order to explain and describe cognitive processes. The thesis is organized as follows. In Chapter 2, we first recall epistemic logic. Then we observe that representing an epistemic situation involving several agents depends very much on the modeling point of view one takes. For example, in a poker game the representation of the game will be different depending on whether the modeler is a poker player playing in the game or the card dealer who knows exactly what the players’ cards are. 
In this thesis, we will carefully distinguish these different modeling approaches and the different kinds of formalisms they give rise to. In fact, the interpretation of a formalism relies quite a lot on the nature of these modeling points of view. Classically, in epistemic logic, the models built are supposed to be correct and represent the situation from an external and objective point of view. We call this modeling approach the perfect external approach. In Chapter 2, we study the modeling point of view of a particular modeler-agent involved in the situation with other agents (and so having a possibly erroneous perception of the situation). We call this modeling approach the internal approach. We propose a logical formalism based on epistemic logic that this agent uses to represent ‘for herself’ the surrounding world. We then set some formal connections between the internal approach and the (perfect) external approach. Finally we axiomatize our logical formalism and show that the resulting logic is decidable. In Chapter 3, we first recall dynamic epistemic logic as viewed by Baltag, Moss and Solecki (to which we will refer by the term BMS). Then we study in which case seriality of the accessibility relations of epistemic models is preserved during an update, first for the full updated model and then for generated submodels of the full updated model. Finally, observing that the BMS formalism follows the (perfect) external approach, we propose an internal version of it, just as we proposed an internal version of epistemic logic in Chapter 2. In Chapter 4, we still follow the internal approach and study the particular case where the event is a private announcement. We first show, thanks to our study in Chapter 3, that in a multi-agent setting, expanding in the AGM style corresponds to performing a private announcement in the BMS style. This indicates that generalizing AGM belief revision theory to a multi-agent setting amounts to studying private announcement. We then generalize the AGM representation theorems to the multi-agent case. Afterwards, in the spirit of the AGM approach, we go beyond the AGM postulates and investigate multi-agent rationality postulates specific to our multi-agent setting, inspired by the fact that the kind of phenomenon we study is private announcement. Finally we provide an example of a revision operation that we apply to a concrete example. In Chapter 5, we follow the (perfect) external approach and enrich the BMS formalism with probabilities. This enables us to provide a fine-grained account of how human agents interpret events involving uncertainty and how they revise their beliefs. Afterwards, we review different principles for the notion of knowledge that have been proposed in the literature and show how some principles that we argue to be reasonable ones can all be captured in our rich and expressive formalism. Finally, we extend our general formalism to a multi-agent setting. In Chapter 6, we still follow the (perfect) external approach and enrich our dynamic epistemic language with converse events. This language is interpreted on structures with accessibility relations for both beliefs and events, unlike the BMS formalism where events and beliefs are not on the same formal level. Then we propose principles relating events and beliefs and provide a complete characterization, which yields a new logic EDL. 
Finally, we show that BMS can be translated into our new logic EDL thanks to the converse operator: this device enables us to translate the structure of the event model directly within a particular axiomatization of EDL, without having to refer to a particular event model in the language (as done in BMS). In Chapter 7 we summarize our results and give an overview of remaining technical issues and some desiderata for future directions of research. Parts of this thesis are based on publication, but we emphasize that they have been entirely rewritten in order to make this thesis an integrated whole. Sections 4.2.2 and 4.3 of Chapter 4 are based on [Aucher, 2008]. Sections 5.2, 5.3 and 5.5 of Chapter 5 are based on [Aucher, 2007]. Chapter 6 is based on [Aucher and Herzig, 2007].
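The simplest case discussed in the abstract, a single agent whose uncertainty is a set of possible worlds and whose beliefs change under a truthful public announcement, can be sketched in a few lines. The propositions, worlds, and function names below are invented for illustration; the thesis's event models, probabilistic extensions, and converse operators go well beyond this.

```python
# A toy single-agent epistemic model: worlds are truth assignments, and the
# agent's uncertainty is the set of world indices she considers possible.
worlds = [
    {"door_open": True,  "colleague_busy": True},
    {"door_open": True,  "colleague_busy": False},
    {"door_open": False, "colleague_busy": True},
]

def believes(considered, prop):
    """Agent A believes prop iff prop holds in every world she considers possible."""
    return all(worlds[i][prop] for i in considered)

def public_announcement(considered, prop):
    """A truthful public announcement eliminates the worlds where prop is false."""
    return {i for i in considered if worlds[i][prop]}

considered = {0, 1, 2}                       # agent A cannot tell these worlds apart
print(believes(considered, "door_open"))     # False: world 2 has the door closed
considered = public_announcement(considered, "door_open")
print(believes(considered, "door_open"))     # True after the update
```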
36

De, Kock Erika. "Decentralising the codification of rules in a decision support expert knowledge base". Pretoria : [s.n.], 2003. http://upetd.up.ac.za/thesis/available/etd-03042004-105746.

Full text source
37

Brennan, Jane, Computer Science & Engineering, Faculty of Engineering, UNSW. "A framework for modelling spatial proximity". Publisher: University of New South Wales. Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/43311.

Full text source
Abstract:
The concept of proximity is an important aspect of human reasoning. Despite the diversity of applications that require proximity measures, the most intuitive notion is that of spatial nearness. The aim of this thesis is to investigate the underpinnings of the notion of nearness, propose suitable formalisations and apply them to the processing of GIS data. More particularly, this work offers a framework for spatial proximity that supports the development of more intuitive tools for users of geographic data processing applications. Many of the existing spatial reasoning formalisms do not account for proximity at all while others stipulate it by using natural language expressions as symbolic values. Some approaches suggest the association of spatial relations with fuzzy membership grades to be calculated for locations in a map using Euclidean distance. However, distance is not the only factor that influences nearness perception. Hence, previous work suggests that nearness should be defined from a more basic notion of influence area. I argue that this approach is flawed, and that nearness should rather be defined from a new, richer notion of impact area that takes both the nature of an object and the surrounding environment into account. A suitable notion of nearness considers the impact areas of both objects whose degree of nearness is assessed. This contrasts with the common approach of taking only one of the two objects into consideration, treating it as a reference against which the nearness of the other is assessed. Cognitive findings are incorporated to make the framework more relevant to the users of Geographic Information Systems (GIS) with respect to their own spatial cognition. GIS users bring a wealth of knowledge about physical space, particularly geographic space, into the processing of GIS data. This is taken into account by introducing the notion of context. Context represents either an expert in the context field or information from the context field as collated by an expert. In order to evaluate and to show the practical implications of the framework, experiments are conducted on a GIS dataset incorporating expert knowledge from the Touristic Road Travel domain.
38

Sarkar, Somwrita. "Acquiring symbolic design optimization problem reformulation knowledge: On computable relationships between design syntax and semantics". Thesis, The University of Sydney, 2009. http://hdl.handle.net/2123/5683.

Full text source
Abstract:
This thesis presents a computational method for the inductive inference of explicit and implicit semantic design knowledge from the symbolic-mathematical syntax of design formulations using an unsupervised pattern recognition and extraction approach. Existing research shows that AI / machine learning based design computation approaches either require high levels of knowledge engineering or large training databases to acquire problem reformulation knowledge. The method presented in this thesis addresses these methodological limitations. The thesis develops, tests, and evaluates ways in which the method may be employed for design problem reformulation. The method is based on the linear algebra based factorization method Singular Value Decomposition (SVD), dimensionality reduction and similarity measurement through unsupervised clustering. The method calculates linear approximations of the associative patterns of symbol cooccurrences in a design problem representation to infer induced coupling strengths between variables, constraints and system components. Unsupervised clustering of these approximations is used to identify useful reformulations. These two components of the method automate a range of reformulation tasks that have traditionally required different solution algorithms. Example reformulation tasks that it performs include selection of linked design variables, parameters and constraints, design decomposition, modularity and integrative systems analysis, heuristically aiding design “case” identification, topology modeling and layout planning. The relationship between the syntax of design representation and the encoded semantic meaning is an open design theory research question. Based on the results of the method, the thesis presents a set of theoretical postulates on computable relationships between design syntax and semantics. The postulates relate the performance of the method with empirical findings and theoretical insights provided by cognitive neuroscience and cognitive science on how the human mind engages in symbol processing and the resulting capacities inherent in symbolic representational systems to encode “meaning”. The performance of the method suggests that semantic “meaning” is a higher order, global phenomenon that lies distributed in the design representation in explicit and implicit ways. A one-to-one local mapping between a design symbol and its meaning, a largely prevalent approach adopted by many AI and learning algorithms, may not be sufficient to capture and represent this meaning. By changing the theoretical standpoint on how a “symbol” is defined in design representations, it was possible to use a simple set of mathematical ideas to perform unsupervised inductive inference of knowledge in a knowledge-lean and training-lean manner, for a knowledge domain that traditionally relies on “giving” the system complex design domain and task knowledge for performing the same set of tasks.
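The core mechanism described above, a low-rank (SVD) approximation of symbol co-occurrences in a design formulation followed by unsupervised clustering to suggest a decomposition, can be sketched as follows. The toy constraint-variable incidence matrix, the rank k, and the use of k-means are assumptions for illustration, not the thesis's actual data or clustering choice.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy symbol-occurrence matrix: rows = constraints, columns = design variables,
# with a 1 wherever the variable appears in the constraint's expression.
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 1, 1, 1]], dtype=float)

# Low-rank (SVD) approximation of the co-occurrence pattern captures the
# induced coupling between variables across constraints.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
var_coords = Vt[:k].T * s[:k]   # variables embedded in a k-dimensional latent space

# Unsupervised clustering of the embedded variables suggests a decomposition
# of the problem into weakly coupled modules.
modules = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(var_coords)
for name, module in zip(["x1", "x2", "x3", "x4", "x5"], modules):
    print(name, "-> module", module)
```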
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Lindsay, Jeffrey Thomas. "The effect of a simultaneous speech discrimination task on navigation in a virtual". Thesis, Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-04102006-103948/.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Yaner, Patrick William. "From Shape to Function: Acquisition of Teleological Models from Design Drawings by Compositional Analogy". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19791.

Full text source
Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2008.
Committee Chair: Goel, Ashok; Committee Member: Eastman, Charles; Committee Member: Ferguson, Ronald; Committee Member: Glasgow, Janice; Committee Member: Nersessian, Nancy; Committee Member: Ram, Ashwin.
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Reul, Quentin H. "Role of description logic reasoning in ontology matching". Thesis, University of Aberdeen, 2012. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=186278.

Full text source
Abstract:
Semantic interoperability is essential on the Semantic Web to enable different information systems to exchange data. Ontology matching has been recognised as a means to achieve semantic interoperability on the Web by identifying similar information in heterogeneous ontologies. Existing ontology matching approaches have two major limitations. The first limitation relates to similarity metrics, which provide a pessimistic value when considering complex objects such as strings and conceptual entities. The second limitation relates to the role of description logic reasoning. In particular, most approaches disregard implicit information about entities as a source of background knowledge. In this thesis, we first present a new similarity function, called the degree of commonality coefficient, to compute the overlap between two sets based on the similarity between their elements. The results of our evaluations show that the degree of commonality performs better than traditional set similarity metrics in the ontology matching task. Secondly, we have developed the Knowledge Organisation System Implicit Mapping (KOSIMap) framework, which differs from existing approaches by using description logic reasoning (i) to extract implicit information as background knowledge for every entity, and (ii) to remove inappropriate correspondences from an alignment. The results of our evaluation show that the use of Description Logic in the ontology matching task can increase coverage. We identify people interested in ontology matching and reasoning techniques as the target audience of this work.
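The degree of commonality coefficient described here generalises set overlap by letting the similarity between individual elements count towards the overlap between the sets. A small sketch under assumed definitions follows; the string similarity used for elements and the averaging scheme are illustrative choices, not the exact formula from the thesis.

```python
from difflib import SequenceMatcher

def element_similarity(a: str, b: str) -> float:
    """String similarity in [0, 1], standing in for a similarity between ontology entities."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def degree_of_commonality(set_a, set_b) -> float:
    """Overlap of two sets based on best-matching element similarities, rather than on exact
    membership as in the Jaccard coefficient, so near-identical labels still contribute."""
    if not set_a or not set_b:
        return 0.0
    best_a = [max(element_similarity(a, b) for b in set_b) for a in set_a]
    best_b = [max(element_similarity(b, a) for a in set_a) for b in set_b]
    return (sum(best_a) + sum(best_b)) / (len(set_a) + len(set_b))

print(degree_of_commonality({"hasAuthor", "title"}, {"has_author", "documentTitle"}))
```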
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Tehan, Jennifer R. "Age-related differences in deceit detection: the role of emotion recognition /". Thesis, Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-04102006-110201/.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Zimanyi, Esteban. "Incomplete and uncertain information in relational databases". Doctoral thesis, Universite Libre de Bruxelles, 1992. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212914.

Full text source
Abstract:

In real life it is very often the case that the available knowledge is imperfect, in the sense that it represents multiple possible states of the external world, yet it is unknown which state corresponds to the actual situation of the world. Imperfect knowledge falls into two different categories. Knowledge is incomplete if it represents different states, one of which is true in the external world. By contrast, knowledge is uncertain if it represents different states which may be satisfied or are likely to be true in the external world.

Imperfect knowledge can be considered from two different perspectives: using either an algebraic or a logical approach. We present both approaches in relation to the standard relational model, providing the necessary background for the subsequent development.

The study of imperfect knowledge has been an active area of research, in particular in the context of relational databases. However, due to the complexity of manipulating imperfect knowledge, few practical results have been obtained so far. In this thesis we provide a survey of the field of incompleteness and uncertainty in relational databases; it can also be used as an introductory tutorial for understanding the intuitive semantics and the problems encountered when representing and manipulating such imperfect knowledge. The survey concentrates on giving a unifying presentation of the different approaches and results found in the literature, thus providing a state of the art in the field.

The rest of the thesis studies in detail the manipulation of one type of incomplete knowledge, namely disjunctive information, and one type of uncertain knowledge, namely probabilistic information. We study both types of imperfect knowledge using similar approaches, that is through an algebraic and a logical framework. The relational algebra operators are generalized for disjunctive and probabilistic relations, and we prove the correctness of these generalizations. In addition, disjunctive and probabilistic databases are formalized using appropriate logical theories and we give sound and complete query evaluation algorithms.

A major implication of these studies is the conviction that viewing incompleteness and uncertainty as different facets of the same problem would make it possible to achieve a deeper understanding of imperfect knowledge, which is absolutely necessary for building information systems capable of modeling complex real-life situations.
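One concrete consequence of generalising the relational operators, as done in the thesis for disjunctive and probabilistic relations, is that tuples which become identical under an operator must have their annotations combined. The toy sketch below shows this for projection over a probabilistic relation; the combination rule assumes tuple independence, which is an illustrative choice rather than necessarily the semantics adopted in the thesis.

```python
from collections import defaultdict

# A probabilistic relation: each tuple carries the probability that it holds.
employees = [
    (("alice", "sales"), 0.9),
    (("bob",   "sales"), 0.6),
    (("carol", "it"),    0.8),
]

def project(relation, keep):
    """Projection that merges the probabilities of tuples collapsing onto the same value."""
    absent = defaultdict(lambda: 1.0)        # probability that the projected tuple does NOT hold
    for tup, p in relation:
        key = tuple(tup[i] for i in keep)
        absent[key] *= (1.0 - p)             # independence assumption
    return [(key, 1.0 - q) for key, q in absent.items()]

# Probability that each department occurs at all: roughly sales 0.96, it 0.8.
print(project(employees, keep=[1]))
```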


Doctorate in Sciences, specialisation in Computer Science
info:eu-repo/semantics/nonPublished
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Lakkaraju, Sai Kiran. "A SLDNF formalization for updates and abduction /". View thesis View thesis, 2001. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030507.112018/index.html.

Full text source
Abstract:
Thesis (M.Sc. (Hons.)) -- University of Western Sydney, 2001.
"A thesis submitted for the degree of Master of Science (Honours) - Computing and Information Technology at University of Western Sydney" Bibliography : leaves 93-98.
Style APA, Harvard, Vancouver, ISO itp.
45

Lister, Kendall. "Toward semantic interoperability for software systems". Connect to thesis, 2008. http://repository.unimelb.edu.au/10187/3594.

Full text source
Abstract:
“In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” [57]
In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences that occur either from pre-existing assumptions or as side-effects of the process of specification are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application.
The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used. The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data.
The second work presented is based on reviving a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions on the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and the ways in which it complements other approaches that have been proposed.
Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site that facilitates resolving semantic incompatibilities between software systems prior to deployment. It builds on the commonly accepted software engineering principle that the cost of correcting faults increases exponentially as a project progresses from phase to phase, with post-deployment corrections significantly more costly than those performed earlier in a project’s life. The barriers to collaboration in software development are identified and steps taken to overcome them. The system presented draws on the recent collaborative successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia, together with a variety of techniques for ontology reconciliation, to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems.
In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility with a particular focus on interaction between software systems, and between software systems and their users, as well as detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
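The field-correspondence algorithm revived in the second study infers which fields of two heterogeneous tables describe the same thing. A bare-bones sketch of that general idea follows; the Jaccard overlap of value sets and the sample tables are assumptions made purely for illustration, and the algorithm evaluated in the thesis is more involved.

```python
def value_overlap(col_a, col_b):
    """Jaccard overlap of the value sets of two columns."""
    a, b = set(col_a), set(col_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_fields(table_a, table_b):
    """For every field of table_a, propose the field of table_b whose values overlap most."""
    return {
        name_a: max(table_b, key=lambda name_b: value_overlap(col_a, table_b[name_b]))
        for name_a, col_a in table_a.items()
    }

flights = {"carrier": ["QF", "VA", "JQ"], "dest": ["MEL", "SYD", "PER"]}
bookings = {"airline": ["QF", "JQ", "QF"], "city": ["SYD", "MEL", "MEL"]}
print(match_fields(flights, bookings))   # {'carrier': 'airline', 'dest': 'city'}
```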
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Romariz, Alexandre Ricardo Soares. "Representação e aquisição de regras em sistemas conexionistas". [s.n.], 1995. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259557.

Full text source
Abstract:
Advisor: Marcio Luiz de Andrade Netto
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica
Summary: This work deals with the representation of structured knowledge (in the form of rules) in connectionist systems. First, a study is made of modular connectionist networks, in which groups of neurons can be associated with rule antecedents and consequents. Next, ways are shown in which these networks are associated with fuzzy logic concepts, in the so-called neuro-fuzzy systems. An incremental acquisition algorithm is proposed for such systems, in which structural modification is performed and not only adaptation of the network parameters. New rules are progressively added to deal with patterns not yet covered by the existing rules. The error resulting from the application of a rule is used as an indicator of its membership function ... Note: The full abstract can be viewed in the complete digital thesis
Abstract: This work addresses the problem of representing structured knowledge (as a set of rules) in connectionist systems. First, modular connectionist networks are studied. In this kind of network, groups of neurons may be associated with rule antecedents or consequents. Next, we show some ways by which these networks are associated with fuzzy logic concepts (neuro-fuzzy systems). An algorithm for incremental rule acquisition is proposed for these systems. Structural modification as well as parameter adaptation are considered. New rules are added periodically to deal with patterns which are not yet covered by the existing rules. The error that results from the application of each rule is used as an indication for membership function construction ... Note: The complete abstract is available with the full electronic digital thesis or dissertation
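The incremental scheme sketched in the two abstracts above, creating a new rule whenever incoming patterns are not adequately covered by the existing rule base, can be caricatured in a few lines. The Gaussian coverage measure, the threshold, and the centre/width bookkeeping are illustrative assumptions, not the algorithm actually proposed in the dissertation.

```python
import math

class IncrementalRuleBase:
    """Toy structural learner: rules are added, not only parameter-tuned."""

    def __init__(self, coverage_threshold=0.3, width=1.0):
        self.rules = []                    # each rule: (centre vector, width)
        self.threshold = coverage_threshold
        self.width = width

    def coverage(self, rule, pattern):
        centre, width = rule
        dist2 = sum((c - p) ** 2 for c, p in zip(centre, pattern))
        return math.exp(-dist2 / (2 * width ** 2))      # Gaussian membership of the pattern

    def observe(self, pattern):
        """Add a new rule only when no existing rule covers the pattern well enough."""
        if not self.rules or max(self.coverage(r, pattern) for r in self.rules) < self.threshold:
            self.rules.append((list(pattern), self.width))

rb = IncrementalRuleBase()
for p in [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0)]:
    rb.observe(p)
print(len(rb.rules))   # 2: the distant third pattern forced a new rule, the second did not
```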
Master's degree
Master in Electrical Engineering
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Silva, Julianne Teixeira e. "Noção de representação na Ciência da Informação: concepções a partir da filosofia de Arthur Schopenhauer". Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9724.

Full text source
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
An enlargement of the notional field of information representation in the context of Information Science is presented, tracing the epistemic trajectory from Classical Antiquity and its linear Western thought, through its heritages, up to the connections perceived between Information Science and the thought of Arthur Schopenhauer. The thesis that the basis for such an enlargement can be found in Schopenhauer's thought, and which defends the comparison of and reflection on the philosophical aspects of the author's thought alongside the theoretical bases of information representation within Information Science in Brazil, indicated, in the light of the reflections and arguments assembled, that there are relevant elements in Schopenhauerian thought with points of theoretical convergence that can ground these debates within Information Science, above all in Schopenhauer's Theory of Knowledge and especially in his notion of conceptual spheres.
An enlargement of the notional field of information representation in the context of Information Science is presented, tracing the epistemic path from classical antiquity and its linear Western thought, through its heritages, to the connections perceived between Information Science and the thought of Arthur Schopenhauer. The thesis that the resources for the bases of this enlargement would be found in Schopenhauer's thought, and which defends the comparison and reflection between the philosophical aspects of the author's thought and the theoretical bases of information representation within Information Science in Brazil, indicated, in view of the reflections and arguments put forward, that there are relevant elements in Schopenhauerian thought in which there are points of theoretical convergence that can ground these debates within Information Science, above all in his Theory of Knowledge, especially in his notion of conceptual spheres.
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Verbancsics, Phillip. "Effective task transfer through indirect encoding". Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4716.

Full text source
Abstract:
An important goal for machine learning is to transfer knowledge between tasks. For example, learning to play RoboCup Keepaway should contribute to learning the full game of RoboCup soccer. Often approaches to task transfer focus on transforming the original representation to fit the new task. Such representational transformations are necessary because the target task often requires new state information that was not included in the original representation. In RoboCup Keepaway, changing from the 3 vs. 2 variant of the task to 4 vs. 3 adds state information for each of the new players. In contrast, this dissertation explores the idea that transfer is most effective if the representation is designed to be the same even across different tasks. To this end, (1) the bird's eye view (BEV) representation is introduced, which can represent different tasks on the same two-dimensional map. Because the BEV represents state information associated with positions instead of objects, it can be scaled to more objects without manipulation. In this way, both the 3 vs. 2 and 4 vs. 3 Keepaway tasks can be represented on the same BEV, which is (2) demonstrated in this dissertation. Yet a challenge for such representation is that a raw two-dimensional map is high-dimensional and unstructured. This dissertation demonstrates how this problem is addressed naturally by the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach. HyperNEAT evolves an indirect encoding, which compresses the representation by exploiting its geometry. The dissertation then explores further exploiting the power of such encoding, beginning by (3) enhancing the configuration of the BEV with a focus on modularity. The need for further nonlinearity is then (4) investigated through the addition of hidden nodes. Furthermore, (5) the size of the BEV can be manipulated because it is indirectly encoded. Thus the resolution of the BEV, which is dictated by its size, is increased in precision and culminates in a HyperNEAT extension that is expressed at effectively infinite resolution. Additionally, scaling to higher resolutions through gradually increasing the size of the BEV is explored. Finally, (6) the ambitious problem of scaling from the Keepaway task to the Half-field Offense task is investigated with the BEV. Overall, this dissertation demonstrates that advanced representations in conjunction with indirect encoding can contribute to scaling learning techniques to more challenging tasks, such as the Half-field Offense RoboCup soccer domain.
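The bird's eye view idea described above attaches state to positions on a fixed two-dimensional map rather than to individual objects, which is what lets the same representation absorb a different number of players. A rough sketch of such a rasterisation follows; the grid size, channel layout, and field dimensions are illustrative assumptions rather than the dissertation's configuration.

```python
import numpy as np

def birds_eye_view(teammates, opponents, field=(20.0, 20.0), resolution=10):
    """Rasterise player positions onto a fixed grid with one channel per team.
    The array shape stays the same whether the task is 3 vs. 2 or 4 vs. 3."""
    bev = np.zeros((2, resolution, resolution))
    for channel, players in enumerate((teammates, opponents)):
        for x, y in players:
            col = min(int(x / field[0] * resolution), resolution - 1)
            row = min(int(y / field[1] * resolution), resolution - 1)
            bev[channel, row, col] = 1.0
    return bev

# 3 vs. 2 Keepaway and 4 vs. 3 Keepaway both yield a (2, 10, 10) input.
print(birds_eye_view([(1, 1), (5, 9), (12, 4)], [(8, 8), (10, 2)]).shape)
print(birds_eye_view([(1, 1), (5, 9), (12, 4), (15, 15)], [(8, 8), (10, 2), (3, 17)]).shape)
```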
ID: 030646258; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2011; Includes bibliographical references (p. 144-152).
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
Computer Science
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Longo, Cristiano. "Set theory for knowledge representation". Doctoral thesis, Università di Catania, 2012. http://hdl.handle.net/10761/1031.

Full text source
Abstract:
The decision problem in set theory has been intensively investigated in the last decades, and decision procedures or proofs of undecidability have been provided for several quantified and unquantified fragments of set theory. In this thesis we study the decision problem for three novel quantified fragments of set theory, which allow the explicit manipulation of ordered pairs. We present a decision procedure for each language of this family, and prove that all of these procedures are optimal (in the sense that they run in nondeterministic polynomial time) when restricted to formulae with quantifier nesting bounded by a constant. The expressive power of the languages of this family is then measured in terms of the set-theoretical constructs they allow to be expressed. In addition, these languages can be profitably employed in knowledge representation, since they allow a large number of description logic constructs to be expressed.
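As an illustration of the kind of expressiveness at stake (the exact syntax of the fragments studied in the thesis is not reproduced here), a description logic axiom such as \(\exists r.C \sqsubseteq D\) over a role r and concepts C, D can be stated set-theoretically once ordered pairs are available, with sets c, d standing for the concepts and a set r of ordered pairs standing for the role:

```latex
% Hypothetical set-theoretic rendering of the DL axiom \exists r.C \sqsubseteq D
\forall x \,\forall y \,\bigl( \langle x, y \rangle \in r \;\wedge\; y \in c \;\rightarrow\; x \in d \bigr)
```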
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Rubin, Eran. "Domain knowledge representation in information systems". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/15229.

Full text source
Abstract:
Information systems and software embed knowledge about the domain in which they operate. This knowledge can be very useful to various stakeholders in the organization, including developers, users, and other organizational workers. However, it is not readily accessible and is usually intertwined with implementation details. Making this knowledge available would be beneficial for several reasons. In particular: 1) software often needs to be updated to reflect changes in the organization; this causes the embedded knowledge to stay current; 2) the actual system development process often incorporates the use of methods and techniques to properly record domain knowledge; 3) knowledge embedded in software is already available in a digital format; and 4) the tools typically used to manage system development (e.g. source and version controls) can be effective in the management and control of knowledge. However, despite all these potential advantages, embedded knowledge is usually not readily accessible to knowledge seekers in the organization. This situation impedes the possible utilization of software-embedded knowledge. The objective of this dissertation is to develop ways of making software-embedded domain knowledge available, accessible, and usable to organizational users. The research challenge is to identify what domain knowledge is involved in systems development, to find ways to formalize it, and to demonstrate that it can be explicitly represented in developed systems. The research covers three main aspects: 1) identifying and formalizing embedded domain knowledge obtained in systems development processes; 2) developing methods for representing this knowledge formally to facilitate its use during and after system development; and 3) demonstrating how this knowledge can be explicitly represented in the final IS implementation code. The first aspect, namely the nature of embedded knowledge, is addressed by analyzing the requirements engineering, systems analysis, and enterprise modeling literature in order to identify the main constructs used for domain representation. Formalization is then accomplished using ontological analysis. The feasibility of explicit representation is attained by suggesting a Model Driven Architecture (MDA) in which the formalized knowledge is used to drive processing in the system. Usability and usefulness of the ideas are demonstrated in two ways. First, case studies and examples show how domain knowledge acquired during extant methods of systems analysis can be represented using the proposed representation constructs. Second, a sample system design, supporting explicit domain knowledge representation in system code, is proposed and demonstrated via a simple prototype.
Styles: APA, Harvard, Vancouver, ISO, etc.