Academic literature on the topic 'ID. Knowledge representation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'ID. Knowledge representation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "ID. Knowledge representation"

1

Farooq, Ammarah, Muhammad Awais, Josef Kittler, and Syed Safwan Khalid. "AXM-Net: Implicit Cross-Modal Feature Alignment for Person Re-identification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4477–85. http://dx.doi.org/10.1609/aaai.v36i4.20370.

Abstract:
Cross-modal person re-identification (Re-ID) is critical for modern video surveillance systems. The key challenge is to align cross-modality representations conforming to semantic information present for a person and ignore background information. This work presents a novel convolutional neural network (CNN) based architecture designed to learn semantically aligned cross-modal visual and textual representations. The underlying building block, named AXM-Block, is a unified multi-layer network that dynamically exploits the multi-scale knowledge from both modalities and re-calibrates each modality according to shared semantics. To complement the convolutional design, contextual attention is applied in the text branch to manipulate long-term dependencies. Moreover, we propose a unique design to enhance visual part-based feature coherence and locality information. Our framework is novel in its ability to implicitly learn aligned semantics between modalities during the feature learning stage. The unified feature learning effectively utilizes textual data as a super-annotation signal for visual representation learning and automatically rejects irrelevant information. The entire AXM-Net is trained end-to-end on CUHK-PEDES data. We report results on two tasks, person search and cross-modal Re-ID. The AXM-Net outperforms the current state-of-the-art (SOTA) methods and achieves 64.44% Rank@1 on the CUHK-PEDES test set. It also outperforms by >10% for cross-viewpoint text-to-image Re-ID scenarios on CrossRe-ID and CUHK-SYSU datasets.
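As a rough illustration of the re-calibration idea described in this abstract, the following Python sketch gates pooled visual and textual features by a shared semantic context vector. It is not the authors' AXM-Block; the module name CrossModalRecalibration, the feature dimensions, and the gating layout are assumptions made purely for the example.

```python
# Hypothetical sketch of cross-modal re-calibration (not the authors' code):
# channel descriptors from both modalities are pooled into a shared semantic
# vector, which then gates each modality's features.
import torch
import torch.nn as nn

class CrossModalRecalibration(nn.Module):
    def __init__(self, vis_dim: int, txt_dim: int, hidden: int = 128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(vis_dim + txt_dim, hidden), nn.ReLU())
        self.gate_vis = nn.Sequential(nn.Linear(hidden, vis_dim), nn.Sigmoid())
        self.gate_txt = nn.Sequential(nn.Linear(hidden, txt_dim), nn.Sigmoid())

    def forward(self, vis_feat, txt_feat):
        # vis_feat: (B, vis_dim), txt_feat: (B, txt_dim) pooled features
        context = self.shared(torch.cat([vis_feat, txt_feat], dim=1))
        return vis_feat * self.gate_vis(context), txt_feat * self.gate_txt(context)

vis = torch.randn(4, 512)   # toy pooled CNN features
txt = torch.randn(4, 256)   # toy pooled text features
vis_out, txt_out = CrossModalRecalibration(512, 256)(vis, txt)
print(vis_out.shape, txt_out.shape)
```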
2

Wu, Guile, and Shaogang Gong. "Generalising without Forgetting for Lifelong Person Re-Identification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 2889–97. http://dx.doi.org/10.1609/aaai.v35i4.16395.

Abstract:
Existing person re-identification (Re-ID) methods mostly prepare all training data in advance, while real-world Re-ID data are inherently captured over time or from different locations, which requires a model to be incrementally generalised from sequential learning of piecemeal new data without forgetting what is already learned. In this work, we call this lifelong person Re-ID, characterised by solving a problem of unseen class identification subject to continuous new domain generalisation and adaptation with class imbalanced learning. We formulate a new Generalising without Forgetting method (GwFReID) for lifelong Re-ID and design a comprehensive learning objective that accounts for classification coherence, distribution coherence and representation coherence in a unified framework. This design helps to simultaneously learn new information, distil old knowledge and solve class imbalance, which enables GwFReID to incrementally improve model generalisation without catastrophic forgetting of what is already learned. Extensive experiments on eight Re-ID benchmarks, CIFAR-100 and ImageNet show the superiority of GwFReID over the state-of-the-art methods.
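The "distil old knowledge" ingredient mentioned in this abstract can be pictured with a standard knowledge-distillation loss, sketched below in Python. This is a generic textbook formulation, not the GwFReID objective; the function name distillation_loss, the temperature T, and the 0.5 weighting are assumptions for illustration only.

```python
# Minimal sketch (not the GwFReID implementation): keep the new model's output
# distribution close to a frozen old model's outputs on current data, so old
# knowledge is not catastrophically forgotten while new classes are learned.
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T: float = 2.0):
    """KL divergence between temperature-softened old and new predictions."""
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

new_logits = torch.randn(8, 100, requires_grad=True)  # toy logits, 100 classes
old_logits = torch.randn(8, 100)                      # frozen old model's logits
targets = torch.randint(0, 100, (8,))                 # toy labels
loss = F.cross_entropy(new_logits, targets) + 0.5 * distillation_loss(new_logits, old_logits)
loss.backward()
```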
3

Vlaeminck, H., J. Vennekens, M. Denecker, and M. Bruynooghe. "An approximative inference method for solving ∃∀SO satisfiability problems." Journal of Artificial Intelligence Research 45 (September 25, 2012): 79–124. http://dx.doi.org/10.1613/jair.3658.

Abstract:
This paper considers the fragment ∃∀SO of second-order logic. Many interesting problems, such as conformant planning, can be naturally expressed as finite domain satisfiability problems of this logic. Such satisfiability problems are computationally hard (Σᵖ₂) and many of these problems are often solved approximately. In this paper, we develop a general approximative method, i.e., a sound but incomplete method, for solving ∃∀SO satisfiability problems. We use a syntactic representation of a constraint propagation method for first-order logic to transform such an ∃∀SO satisfiability problem to an ∃SO(ID) satisfiability problem (second-order logic, extended with inductive definitions). The finite domain satisfiability problem for the latter language is in NP and can be handled by several existing solvers. Inductive definitions are a powerful knowledge representation tool, and this motivates us to also approximate ∃∀SO(ID) problems. In order to do this, we first show how to perform propagation on such inductive definitions. Next, we use this to approximate ∃∀SO(ID) satisfiability problems. All this provides a general theoretical framework for a number of approximative methods in the literature. Moreover, we also show how we can use this framework for solving practically useful problems, such as conformant planning, in an effective way.
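To make the fragment concrete, a finite-domain ∃∀SO satisfiability problem has the shape sketched below, read here with conformant planning in mind (a schematic reading, not the paper's exact encoding; the predicate names Init and Goal are illustrative).

```latex
% Does a plan P exist that reaches the goal for every admissible initial state I?
\exists P \,\forall I \;\bigl(\mathit{Init}(I) \Rightarrow \mathit{Goal}(P, I)\bigr)
```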
4

Buchnat, Marzena, and Aleksandra Jasielska. "Knowledge about anger in children with a mild intellectual disability." International Journal of Special Education (IJSE) 37, no. 2 (December 4, 2022): 92–105. http://dx.doi.org/10.52291/ijse.2022.37.43.

Abstract:
The knowledge of children with a mild intellectual disability (ID) is less complex and poorer than that of their peers in the intellectual norm (IN). The aim of this study was to characterize knowledge about anger in children with mild intellectual disabilities. The study used the authoring tool to measure children's knowledge of emotions, including anger. This tool facilitated the exploration of the cognitive representation of the basic emotions available in three codes (which perform the functions of perception, expression, and understanding) and the interconnections between them. Children in the intellectual norm (N = 30) and children with mild intellectual disabilities (N = 30) participated in the study. The results mainly indicated differences in how anger was understood by particular groups, to the detriment of children with a disability. The results were largely determined by the child's level of organization of knowledge about anger and accompanying mental operations.
5

Humayun, Shamim, Shabana Sartaj, and Waqar Ali Shah. "Exploring Linguistic Representation of Women on Facebook: A Study in Pakistani Context." Advances in Language and Literary Studies 10, no. 2 (April 30, 2019): 152. http://dx.doi.org/10.7575/aiac.alls.v.10n.2p.152.

Abstract:
Facebook is an online platform where people form a self-presentation and construct an identity through a personal ID or profile. Use of this social website fully depicts the norms and perceptions of patriarchal society. This study aims to explore, through a quantitative study, the way men of society speak to women, as well as the way women speak with other women, which reflects an idea of women in Pakistani society on Facebook. The analysis revealed that women, in Pakistani society in particular and elsewhere in general, are linguistically expressed as beings that are not equal to men in intellect, bodily strength, wisdom, knowledge, forbearance, and bravery, and are thus dubbed timid, weak, sexy, foolish, cute, hot, and so on by society as a whole.
6

HOU, PING, BROES DE CAT, and MARC DENECKER. "FO(FD): Extending classical logic with rule-based fixpoint definitions." Theory and Practice of Logic Programming 10, no. 4-6 (July 2010): 581–96. http://dx.doi.org/10.1017/s1471068410000293.

Abstract:
We introduce fixpoint definitions, a rule-based reformulation of fixpoint constructs. The logic FO(FD), an extension of classical logic with fixpoint definitions, is defined. We illustrate the relation between FO(FD) and FO(ID), which is developed as an integration of two knowledge representation paradigms. The satisfiability problem for FO(FD) is investigated by first reducing FO(FD) to difference logic and then using solvers for difference logic. These reductions are evaluated in the computation of models for FO(FD) theories representing fairness conditions and we provide potential applications of FO(FD).
7

Wang, Di, Yujuan Si, Weiyi Yang, Gong Zhang, and Jia Li. "A Novel Electrocardiogram Biometric Identification Method Based on Temporal-Frequency Autoencoding." Electronics 8, no. 6 (June 12, 2019): 667. http://dx.doi.org/10.3390/electronics8060667.

Abstract:
For good performance, most existing electrocardiogram (ECG) identification methods still need to adopt a denoising process to remove noise interference beforehand. This specific signal preprocessing technique requires great efforts for algorithm engineering and is usually complicated and time-consuming. To more conveniently remove the influence of noise interference and realize accurate identification, a novel temporal-frequency autoencoding based method is proposed. In particular, the raw data is firstly transformed into the wavelet domain, where multi-level time-frequency representation is achieved. Then, a prior knowledge-based feature selection is proposed and applied to the transformed data to discard noise components and retain identity-related information simultaneously. Afterward, the stacked sparse autoencoder is introduced to learn intrinsic discriminative features from the selected data, and Softmax classifier is used to perform the identification task. The effectiveness of the proposed method is evaluated on two public databases, namely, ECG-ID and Massachusetts Institute of Technology-Biotechnology arrhythmia (MIT-BIH-AHA) databases. Experimental results show that our method can achieve high multiple-heartbeat identification accuracies of 98.87%, 92.3%, and 96.82% on raw ECG signals which are from the ECG-ID (Two-recording), ECG-ID (All-recording), and MIT-BIH-AHA database, respectively, indicating that our method can provide an efficient way for ECG biometric identification.
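A minimal Python sketch of the pipeline outlined above might look as follows. It uses toy data and stand-in components: pywt.wavedec provides the wavelet-domain representation, the kept sub-band indices play the role of the prior-knowledge-based feature selection, and an MLPClassifier stands in for the paper's stacked sparse autoencoder with Softmax classifier, so this is only an illustration of the idea.

```python
# Rough sketch (assumed details, not the authors' code): wavelet-domain
# representation, keep only sub-bands assumed to carry identity information,
# then train a classifier on the selected coefficients.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def heartbeat_features(beat, keep_levels=(1, 2, 3)):
    coeffs = pywt.wavedec(beat, "db4", level=4)   # [cA4, cD4, cD3, cD2, cD1]
    kept = [coeffs[i] for i in keep_levels]       # discard bands treated as noise
    return np.concatenate(kept)

rng = np.random.default_rng(0)
beats = rng.standard_normal((200, 256))           # toy "heartbeats"
labels = rng.integers(0, 10, size=200)            # toy subject IDs
X = np.vstack([heartbeat_features(b) for b in beats])
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(X, labels)
print(clf.score(X, labels))
```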
8

Wang, Zhifeng, Wenxing Yan, Chunyan Zeng, Yuan Tian, and Shi Dong. "A Unified Interpretable Intelligent Learning Diagnosis Framework for Learning Performance Prediction in Intelligent Tutoring Systems." International Journal of Intelligent Systems 2023 (February 20, 2023): 1–20. http://dx.doi.org/10.1155/2023/4468025.

Abstract:
Intelligent learning diagnosis is a critical engine of intelligent tutoring systems, which aims to estimate learners’ current knowledge mastery status and predict their future learning performance. The significant challenge with traditional learning diagnosis methods is the inability to balance diagnostic accuracy and interpretability. Although the existing psychometric-based learning diagnosis methods provide some domain interpretation through cognitive parameters, they have insufficient modeling capability with a shallow structure for large-scale learning data. While the deep learning-based learning diagnosis methods have improved the accuracy of learning performance prediction, their inherent black-box properties lead to a lack of interpretability, making their results untrustworthy for educational applications. To settle the abovementioned problem, the proposed unified interpretable intelligent learning diagnosis framework, which benefits from the powerful representation learning ability of deep learning and the interpretability of psychometrics, achieves a better performance of learning prediction and provides interpretability from three aspects: cognitive parameters, learner-resource response network, and weights of self-attention mechanism. Within the proposed framework, this paper presents a two-channel learning diagnosis mechanism LDM-ID as well as a three-channel learning diagnosis mechanism LDM-HMI. Experiments on two real-world datasets and a simulation dataset show that our method has higher accuracy in predicting learners’ performances compared with the state-of-the-art models and can provide valuable educational interpretability for applications such as precise learning resource recommendation and personalized learning tutoring in intelligent tutoring systems.
9

Muhammad Iqbal Zamzami and Dailatus Syamsiyah. "The Material Analysis and Learning Method of Nahwu in the Book of Qawa'id Al-Asasiyyah Li Al-Lughah Al-'Arabiyyah." al Mahāra: Jurnal Pendidikan Bahasa Arab 6, no. 2 (December 28, 2020): 257–78. http://dx.doi.org/10.14421/almahara.2020.062.06.

Abstract:
Learning materials are the knowledge, behavior, and competences that students must acquire to meet the established standards of competence. A learning method is the means by which lesson materials are presented to students in class, both individually and in groups. These are essential learning components to discuss, because the right learning method makes it easier for students to take in the material they are given. This study aims to identify the content of the material and the Nahwu learning method of the book Qawa'id al-Asasiyyah li al-Lughah al-'Arabiyyah. The study is a library study (library research) of a descriptive-analytical nature, focusing on the aspects of selection, gradation, presentation, and repetition in the material presented. The result of the study is that the book uses a deductive (qiyasi) method in its Nahwu teaching. In terms of selection, the book's vocabulary aims to apply Nahwu to Arabic verse, Qur'anic verses, and readings on specific themes. In terms of gradation, the book generally follows a straight gradation typology, with only a few subchapters using a rotating gradation. In terms of presentation, the learning strategy focuses on i'rab analysis of reading texts such as manuscripts, newspapers and magazines, the Qur'an, and so on. In terms of repetition, the book uses evaluation tools in the form of questions and exercises, together with a supplement of i'rab to reinforce the qawa'id material. Keywords: learning materials, Nahwu learning method, the book Qawa'id al-Asasiyyah li al-Lughah al-'Arabiyyah.
10

Teng, Jackson Horlick, Thian Song Ong, Tee Connie, Kalaiarasi Sonai Muthu Anbananthen, and Pa Pa Min. "Optimized Score Level Fusion for Multi-Instance Finger Vein Recognition." Algorithms 15, no. 5 (May 11, 2022): 161. http://dx.doi.org/10.3390/a15050161.

Abstract:
The finger vein recognition system uses blood vessels inside the finger of an individual for identity verification. The public is in favor of a finger vein recognition system over conventional passwords or ID cards as the biometric technology is harder to forge, misplace, and share. In this study, the histogram of oriented gradients (HOG) features, which are robust against changes in illumination and position, are extracted from the finger vein for personal recognition. To further increase the amount of information that can be used for recognition, different instances of the finger vein, ranging from the index, middle, and ring finger are combined to form a multi-instance finger vein representation. This fusion approach is preferred since it can be performed without requiring additional sensors or feature extractors. To combine different instances of finger vein effectively, score level fusion is adopted to allow greater compatibility among the wide range of matches. Towards this end, two methods are proposed: Bayesian optimized support vector machine (SVM) score fusion (BSSF) and Bayesian optimized SVM based fusion (BSBF). The fusion results are incrementally improved by optimizing the hyperparameters of the HOG feature, SVM matcher, and the weighted sum of score level fusion using the Bayesian optimization approach. This is considered a kind of knowledge-based approach that takes into account the previous optimization attempts or trials to determine the next optimization trial, making it an efficient optimizer. By using stratified cross-validation in the training process, the proposed method is able to achieve the lowest EER of 0.48% and 0.22% for the SDUMLA-HMT dataset and UTFVP dataset, respectively.
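The score-level fusion step can be illustrated with the small Python sketch below: per-finger match scores are combined by a weighted sum, and the weights are chosen to minimise a rough equal-error-rate estimate. The scores are synthetic, the eer helper is a coarse estimate, and a grid search stands in for the paper's Bayesian optimisation, so this is only a sketch of the idea.

```python
# Illustrative sketch (assumed details, not the paper's code): weighted-sum
# score fusion across finger instances, with weights picked on validation data.
import numpy as np
from itertools import product

def eer(genuine, impostor):
    """Rough EER estimate by sweeping a threshold over the score range."""
    thresholds = np.linspace(min(impostor.min(), genuine.min()),
                             max(impostor.max(), genuine.max()), 200)
    best = 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # false accept rate
        frr = np.mean(genuine < t)     # false reject rate
        best = min(best, max(far, frr))
    return best

rng = np.random.default_rng(1)
# toy match scores for index / middle / ring fingers, rows = comparisons
gen = rng.normal([0.7, 0.65, 0.6], 0.1, size=(300, 3))   # genuine comparisons
imp = rng.normal([0.4, 0.45, 0.4], 0.1, size=(300, 3))   # impostor comparisons

best_w, best_eer = None, 1.0
for w in product(np.linspace(0, 1, 11), repeat=3):        # coarse grid search
    if sum(w) == 0:
        continue
    w = np.array(w) / sum(w)
    e = eer(gen @ w, imp @ w)
    if e < best_eer:
        best_w, best_eer = w, e
print(best_w, best_eer)
```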

Dissertations / Theses on the topic "ID. Knowledge representation"

1

Khor, Sebastian Wankun. "A fuzzy knowledge map framework for knowledge representation." PhD thesis, Murdoch University, 2007. https://researchrepository.murdoch.edu.au/id/eprint/129/.

Abstract:
Cognitive Maps (CMs) have shown promise as tools for modelling and simulation of knowledge in computers as representation of real objects, concepts, perceptions or events and their relations. This thesis examines the application of fuzzy theory to the expression of these relations, and investigates the development of a framework to better manage the operations of these relations. The Fuzzy Cognitive Map (FCM) was introduced in 1986 but little progress has been made since. This is because of the difficulty of modifying or extending its reasoning mechanism from causality to relations other than causality, such as associative and deductive reasoning. The ability to express the complex relations between objects and concepts determines the usefulness of the maps. Structuring these concepts and relations in a model so that they can be consistently represented and quickly accessed and manipulated by a computer is the goal of knowledge representation. This forms the main motivation of this research. In this thesis, a novel framework is proposed whereby single-antecedent fuzzy rules can be applied to a directed graph, and reasoning ability is extended to include noncausality. The framework provides a hierarchical structure where a graph in a higher layer represents knowledge at a high level of abstraction, and graphs in a lower layer represent the knowledge in more detail. The framework allows a modular design of knowledge representation and facilitates the creation of a more complex structure for modelling and reasoning. The experiments conducted in this thesis show that the proposed framework is effective and useful for deriving inferences from input data, solving certain classification problems, and for prediction and decision-making.
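For readers unfamiliar with FCMs, the generic update rule they rely on is easy to state: concept activations are propagated through a weighted causal matrix and squashed until they settle. The Python sketch below shows this textbook mechanism only; it does not reproduce the thesis' fuzzy knowledge map framework, and the weights and the sigmoid steepness lam are arbitrary toy values.

```python
# Minimal illustration of the standard FCM reasoning mechanism.
import numpy as np

def fcm_run(W, state, steps=50, lam=1.0):
    """W[i, j]: causal influence of concept i on concept j, in [-1, 1]."""
    for _ in range(steps):
        nxt = 1.0 / (1.0 + np.exp(-lam * (state @ W)))   # sigmoid squashing
        if np.allclose(nxt, state, atol=1e-6):
            break
        state = nxt
    return state

# Toy 3-concept map: C0 promotes C1, C1 promotes C2, C2 inhibits C0.
W = np.array([[ 0.0, 0.8, 0.0],
              [ 0.0, 0.0, 0.6],
              [-0.5, 0.0, 0.0]])
print(fcm_run(W, np.array([1.0, 0.0, 0.0])))
```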
2

Grau, Ron. "The acquisition and representation of knowledge about complex multi-dynamic processes." Thesis, University of Sussex, 2009. http://sro.sussex.ac.uk/id/eprint/15370/.

Abstract:
This thesis is concerned with the acquisition, representation, modelling and discovery of knowledge in ill-structured domains. In the context of this work, these are referred to as domains that involve "complex multi-dynamic (CMD) processes". A CMD process is an abstract concept for thinking about combinations of different processes where any specification and explanation involves large amounts of heterogeneous knowledge. Due to manifold cognitive and representational problems, this particular knowledge is currently hard to acquire from experts and difficult to integrate in process models. The thesis focuses on two problems in the context of modelling, discovery and design of CMD processes, a knowledge representation problem and a knowledge acquisition problem. The thesis outlines a solution by drawing together different theoretical and technological developments related to the fields of Artificial Intelligence, Cognitive Science and Computer Science, including research on computational models of scientific discovery, process modelling, and representation design. An integrative framework of knowledge representations and acquisition methods has been established, underpinning a general paradigm of CMD processes. The framework takes a compositional, collaborative approach to knowledge acquisition by providing methods for the decomposition of complex process combinations into systems of process fragments and the localisation of structural change, process behaviour and function within these systems. Diagrammatic representations play an important role, as they provide a range of representational, cognitive and computational properties that are particularly useful for meeting many of the difficulties that CMD processes pose. The research has been applied to Industrial Bakery Product Manufacturing, a challenging domain that involves a variety of physical, chemical and biochemical process combinations. A software prototype (CMD SUITE) has been implemented that integrates the developed theoretical framework to create novel, interactive knowledge-based tools which are aimed towards ill-structured domains of knowledge. The utility of the software workbench and its underlying CMD Framework has been demonstrated in a case study. The bakery experts collaborating in this project were able to successfully utilise the software tools to express and integrate their knowledge in a new way, while overcoming limits of previously used models and tools.
3

Matikainen, Tiina Johanna. "Semantic Representation of L2 Lexicon in Japanese University Students." Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/133319.

Abstract:
In a series of studies using semantic relatedness judgment response times, Jiang (2000, 2002, 2004a) has claimed that L2 lexical entries fossilize with their equivalent L1 content or something very close to it. In another study using a more productive test of lexical knowledge (Jiang 2004b), however, the evidence for this conclusion was less clear. The present study is a partial replication of Jiang (2004b) with Japanese learners of English. The aims of the study are to investigate the influence of the first language (L1) on second language (L2) lexical knowledge, to investigate whether lexical knowledge displays frequency-related, emergent properties, and to investigate the influence of the L1 on the acquisition of L2 word pairs that have a common L1 equivalent. Data were collected from a sentence completion task completed by 244 participants, who were shown sentence contexts in which they chose between L2 word pairs sharing a common equivalent in the students' first language, Japanese. The data were analyzed using the statistical analyses available in the programming environment R to quantify the participants' ability to discriminate between synonymous and non-synonymous use of these L2 word pairs. The results showed a strong bias against synonymy for all word pairs; the participants tended to make a distinction between the two synonymous items by assigning each word a distinct meaning. With the non-synonymous items, lemma frequency was closely related to the participants' success in choosing the correct word in the word pair. In addition, lemma frequency and the degree of similarity between the words in the word pair were closely related to the participants' overall knowledge of the non-synonymous meanings of the vocabulary items. The results suggest that the participants had a stronger preference for non-synonymous options than for the synonymous option. This suggests that the learners might have adopted a one-word, one-meaning learning strategy (Willis, 1998). The reasonably strong relationship between several of the usage-based statistics and the item measures from R suggests that with exposure learners are better able to use words in ways that are similar to native speakers of English, to differentiate between appropriate and inappropriate contexts and to recognize the boundary separating semantic overlap and semantic uniqueness. Lexical similarity appears to play a secondary role, in combination with frequency, in learners' ability to differentiate between appropriate and inappropriate contexts when using L2 word pairs that have a single translation in the L1.
4

Glinos, Demetrios. "SYNTAX-BASED CONCEPT EXTRACTION FOR QUESTION ANSWERING." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3565.

Abstract:
Question answering (QA) stands squarely along the path from document retrieval to text understanding. As an area of research interest, it serves as a proving ground where strategies for document processing, knowledge representation, question analysis, and answer extraction may be evaluated in real world information extraction contexts. The task is to go beyond the representation of text documents as "bags of words" or data blobs that can be scanned for keyword combinations and word collocations in the manner of internet search engines. Instead, the goal is to recognize and extract the semantic content of the text, and to organize it in a manner that supports reasoning about the concepts represented. The issue presented is how to obtain and query such a structure without either a predefined set of concepts or a predefined set of relationships among concepts. This research investigates a means for acquiring from text documents both the underlying concepts and their interrelationships. Specifically, a syntax-based formalism for representing atomic propositions that are extracted from text documents is presented, together with a method for constructing a network of concept nodes for indexing such logical forms based on the discourse entities they contain. It is shown that meaningful questions can be decomposed into Boolean combinations of question patterns using the same formalism, with free variables representing the desired answers. It is further shown that this formalism can be used for robust question answering using the concept network and WordNet synonym, hypernym, hyponym, and antonym relationships. This formalism was implemented in the Semantic Extractor (SEMEX) research tool and was tested against the factoid questions from the 2005 Text Retrieval Conference (TREC), which operated upon the AQUAINT corpus of newswire documents. After adjusting for the limitations of the tool and the document set, correct answers were found for approximately fifty percent of the questions analyzed, which compares favorably with other question answering systems.
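The WordNet relationships this abstract relies on (synonyms, hypernyms, hyponyms, antonyms) can be queried directly with NLTK, as in the short sketch below; the SEMEX concept network and logical forms themselves are not reproduced, and the example words are arbitrary.

```python
# Small sketch of the lexical relations used for robust answer matching.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

syn = wn.synsets("vehicle")[0]
print([l.name() for l in syn.lemmas()])        # synonyms in the synset
print([h.name() for h in syn.hypernyms()])     # more general concepts
print([h.name() for h in syn.hyponyms()][:5])  # more specific concepts
good = wn.synsets("good", pos=wn.ADJ)[0].lemmas()[0]
print([a.name() for a in good.antonyms()])     # antonyms, if any
```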
5

Rudolph, Sebastian. "Relational Exploration: Combining Description Logics and Formal Concept Analysis for Knowledge Specification." Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A25002.

Abstract:
Facing the growing amount of information in today's society, the task of specifying human knowledge in a way that can be unambiguously processed by computers becomes more and more important. Two acknowledged fields in this evolving scientific area of Knowledge Representation are Description Logics (DL) and Formal Concept Analysis (FCA). While DL concentrates on characterizing domains via logical statements and inferring knowledge from these characterizations, FCA builds conceptual hierarchies on the basis of present data. This work introduces Relational Exploration, a method for acquiring complete relational knowledge about a domain of interest by successively consulting a domain expert without ever asking redundant questions. This is achieved by combining DL and FCA: DL formalisms are used for defining FCA attributes while FCA exploration techniques are deployed to obtain or refine DL knowledge specifications.
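The FCA half of this combination rests on the standard derivation operators, which the tiny Python sketch below illustrates on a toy formal context; it shows only the textbook closure mechanism, not the thesis' exploration procedure or its DL-defined attributes.

```python
# Standard FCA derivation operators on a toy formal context: the double-prime
# closure of an attribute set yields the intent of a formal concept.
context = {                       # objects -> attributes they have
    "dog":    {"mammal", "pet"},
    "cat":    {"mammal", "pet"},
    "whale":  {"mammal", "aquatic"},
    "salmon": {"aquatic"},
}

def extent(attrs):                # objects having all given attributes
    return {g for g, a in context.items() if attrs <= a}

def intent(objs):                 # attributes shared by all given objects
    sets = [context[g] for g in objs]
    return set.intersection(*sets) if sets else set.union(*context.values())

print(extent({"mammal"}))          # {'dog', 'cat', 'whale'}
print(intent(extent({"pet"})))     # closure of {'pet'} -> {'mammal', 'pet'}
```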
6

Turhan, Anni-Yasmin. "On the Computation of Common Subsumers in Description Logics." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23919.

Abstract:
Description logics (DL) knowledge bases are often built by users with expertise in the application domain, but little expertise in logic. To support such users when building their knowledge bases, a number of extension methods have been proposed to provide the user with concept descriptions as a starting point for new concept definitions. The inference service central to several of these approaches is the computation of (least) common subsumers of concept descriptions. In case disjunction of concepts can be expressed in the DL under consideration, the least common subsumer (lcs) is just the disjunction of the input concepts. Such a trivial lcs is of little use as a starting point for a new concept definition to be edited by the user. To address this problem we propose two approaches to obtain "meaningful" common subsumers in the presence of disjunction tailored to two different methods to extend DL knowledge bases. More precisely, we devise computation methods for the approximation-based approach and the customization of DL knowledge bases, extend these methods to DLs with number restrictions and discuss their efficient implementation.
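For DLs without disjunction, such as EL, the least common subsumer has a simple recursive characterisation (the product of description trees), sketched below in Python for TBox-free concepts. This is the textbook construction only, not the approximation-based or customization methods developed in the thesis; the encoding of a concept as a pair of concept names and role successors is an assumption of the example.

```python
# Hedged sketch of the lcs of simple EL concepts without a TBox.
# A concept is a pair (set of concept names, {role: [successor concepts]}).
def lcs(c, d):
    names_c, succs_c = c
    names_d, succs_d = d
    names = names_c & names_d
    succs = {}
    for role in succs_c.keys() & succs_d.keys():
        # every pair of r-successors contributes one r-successor to the lcs
        succs[role] = [lcs(x, y) for x in succs_c[role] for y in succs_d[role]]
    return (names, succs)

# C = Person ⊓ ∃child.Doctor,  D = Person ⊓ Rich ⊓ ∃child.(Doctor ⊓ Tall)
C = ({"Person"}, {"child": [({"Doctor"}, {})]})
D = ({"Person", "Rich"}, {"child": [({"Doctor", "Tall"}, {})]})
print(lcs(C, D))   # ({'Person'}, {'child': [({'Doctor'}, {})]})
```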
7

Münnich, Stefan. "Ontologien als semantische Zündstufe für die digitale Musikwissenschaft?" De Gruyter, Berlin / Boston, 2018. https://slub.qucosa.de/id/qucosa%3A36849.

Abstract:
Ontologies play a crucial role for the formalised representation of knowledge and information as well as for the infrastructure of the semantic web. Despite early initiatives driven by libraries and memory institutions, German-language musicology as a whole has approached the subject only very hesitantly. In this overview, the author explains basic concepts, challenges, and approaches for modelling ontologies, and identifies promising models and already tested use cases for a 'semantic' digital musicology.
8

Baader, Franz, and Adrian Nuradiansyah. "Mixing Description Logics in Privacy-Preserving Ontology Publishing." Springer, 2019. https://tud.qucosa.de/id/qucosa%3A75565.

Abstract:
In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an EL instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL EL. We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given EL concept can be computed. In the present paper, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the situations where the attacker’s knowledge is given by an FL0 or an FLE concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of FL0, it turns out to be actually lower (polynomial) for the more expressive DL FLE.
9

Hladik, Jan. "To and Fro Between Tableaus and Automata for Description Logics." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A24073.

Abstract:
Description Logics (DLs) are a family of knowledge representation languages with well-defined logic-based semantics and decidable inference problems, e.g. satisfiability. Two of the most widely used decision procedures for the satisfiability problem are tableau- and automata-based algorithms. Due to their different modes of operation, these two classes have complementary properties: tableau algorithms are well-suited for implementation and for showing PSPACE and NEXPTIME complexity results, whereas automata algorithms are particularly useful for showing EXPTIME results. Additionally, they allow for an elegant handling of infinite structures, but they are not suited for implementation. The aim of this thesis is to analyse the reasons for these differences and to find ways of transferring properties between the two approaches in order to reconcile the positive properties of both. For this purpose, we develop methods that enable us to show PSPACE results with the help of automata and to automatically derive an EXPTIME result from a tableau algorithm.
10

Steffen, Johann. "VIKA - Konzeptstudien eines virtuellen Konstruktionsberaters für additiv zu fertigende Flugzeugstrukturbauteile." Thelem Universitätsverlag & Buchhandlung GmbH & Co. KG, 2021. https://tud.qucosa.de/id/qucosa%3A75869.

Abstract:
The subject of this work is the conceptual development of a virtual application that enables users in aircraft structural design, in the context of additive manufacturing, to make important decisions for the part development process interactively and intuitively. Depending on the use case, the application should be able to adapt the information it provides to the particular requirements and needs of the user.

Book chapters on the topic "ID. Knowledge representation"

1

Nodenot, Thierry, Pierre Laforcade, and Xavier Le Pallec. "Visual Design of coherent Technology-Enhanced Learning Systems." In Handbook of Visual Languages for Instructional Design, 252–79. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-729-4.ch013.

Abstract:
Visual instructional design languages currently provide notations for representing the intermediate and final results of a knowledge engineering process. While some languages focus particularly on the formal representation of a learning design that can be transformed into machine-interpretable code (i.e., IMS-LD players), others have been developed to support the creativity of designers while exploring their problem spaces and solutions. This chapter introduces CPM (Computer Problem-based Metamodel), a visual language for the instructional design of Problem-Based Learning (PBL) situations. On the one hand, CPM sketches of a PBL situation can improve communication within multidisciplinary ID teams; on the other hand, CPM blueprints can describe the functional components that a Technology-Enhanced Learning (TEL) system should offer to support such a PBL situation. We first present the aims and the fundamentals of the CPM language. Then, we analyze CPM usability using a set of CPM diagrams produced in a case study in a 'real-world' setting.
2

Vinayakumar, R., K. P. Soman, and Prabaharan Poornachandran. "Evaluation of Recurrent Neural Network and its Variants for Intrusion Detection System (IDS)." In Deep Learning and Neural Networks, 295–316. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch018.

Abstract:
This article describes how sequential data modeling is a relevant task in Cybersecurity. Sequences are attributed temporal characteristics either explicitly or implicitly. Recurrent neural networks (RNNs) are a subset of artificial neural networks (ANNs) which have appeared as a powerful, principled approach to learn dynamic temporal behaviors in an arbitrary length of large-scale sequence data. Furthermore, stacked recurrent neural networks (S-RNNs) have the potential to learn complex temporal behaviors quickly, including sparse representations. To leverage this, the authors model network traffic as a time series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range with a supervised learning method, using millions of known good and bad network connections. To find out the best architecture, the authors complete a comprehensive review of various RNN architectures with their network parameters and network structures. Ideally, as a test bed, they use the existing benchmark Defense Advanced Research Projects Agency (DARPA) / Knowledge Discovery and Data Mining (KDD) Cup '99 intrusion detection (ID) contest data set to show the efficacy of these various RNN architectures. All the experiments of deep learning architectures are run up to 1000 epochs with a learning rate in the range [0.01-0.5] on a GPU-enabled TensorFlow and experiments of traditional machine learning algorithms are done using Scikit-learn. Experiments of families of RNN architecture achieved a low false positive rate in comparison to the traditional machine learning classifiers. The primary reason is that RNN architectures are able to store information for long-term dependencies over time-lags and to adjust with successive connection sequence information. In addition, the effectiveness of RNN architectures is shown for the UNSW-NB15 data set.
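A bare-bones version of such a sequence model can be put together in a few lines of Keras, as sketched below; the shapes (100 packets, 41 features), the single LSTM layer, and the random toy data are assumptions for illustration and do not reproduce the article's architectures or the DARPA/KDD Cup '99 setup.

```python
# Illustrative sketch: an LSTM over fixed-length sequences of per-packet
# features, trained to flag a connection window as normal or attack.
import numpy as np
import tensorflow as tf

timesteps, features = 100, 41          # assumed sequence length and feature count
model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])

rng = np.random.default_rng(0)
X = rng.standard_normal((512, timesteps, features)).astype("float32")  # toy data
y = rng.integers(0, 2, size=(512, 1)).astype("float32")                # toy labels
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```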

Conference papers on the topic "ID. Knowledge representation"

1

Southward, Steve C. "Real-Time Parameter ID Using Polynomial Chaos Expansions." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-43745.

Abstract:
A novel real-time parameter identification algorithm has been developed that exploits polynomial chaos expansion (PCE) representations of uncertain parameters. Dynamic system models inevitably contain parameters whose values are rarely known with absolute certainty. In many cases, such parameters are either not measurable, or they are slowly time varying. In some cases, the dynamic system model is inadequate and parameter values are simply chosen to provide a "best fit" representation. For the method proposed here, we assume a priori knowledge of the probability distributions associated with the uncertain parameters. Within the PCE framework, the uncertain parameter distribution is explicitly propagated through the dynamic system equations using a Galerkin projection onto an orthogonal polynomial basis. The probabilistic PCE model is then collapsed to a deterministic model where an adaptive algorithm is designed to effectively reduce the uncertainty. For illustration, this algorithm is numerically demonstrated using a simple first-order dynamic system with only a single uncertain parameter.
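The core PCE idea, writing an uncertain parameter as an expansion in orthogonal polynomials of a standard random germ, can be shown numerically in a few lines; the sketch below uses a Gaussian parameter whose Hermite expansion is exact after two terms. The prior mean and standard deviation are arbitrary toy values, and the paper's Galerkin projection and adaptive identification algorithm are not reproduced.

```python
# Minimal numerical sketch of a PCE: theta ≈ sum_k a_k He_k(xi) with xi ~ N(0, 1).
import numpy as np
from numpy.polynomial import hermite_e as He

mean, std = 2.0, 0.3                       # assumed prior for the parameter
coeffs = np.array([mean, std])             # exact PCE of a Gaussian: a0 + a1*He_1(xi)

xi = np.random.default_rng(0).standard_normal(100_000)   # samples of the germ
theta_samples = He.hermeval(xi, coeffs)    # propagate through the expansion
print(theta_samples.mean(), theta_samples.std())          # ~2.0, ~0.3
```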
2

Perfetto-Demarchi, Ana Paula, Cleuza Fornasier, Bernabé Hernandis Ortuño, and Elingth Simoné Rosales Marquina. "O uso do dispositivo ID-Think no compartilhamento de conhecimento." In Systems & Design: Beyond Processes and Thinking. Valencia: Universitat Politècnica València, 2016. http://dx.doi.org/10.4995/ifdp.2016.2400.

Abstract:
Considering that the great advantage of an organization today is the knowledge it has, and how it manages this knowledge, this article reports the application of the ID-Think device in the manufacturing sector of a fashion organization for its validation. The device applies knowledge management through the skills and attitudes of the design thinker. Its purpose is to assist the process of innovation in organizations by using some of the design thinker's skills in the explicitation and externalization of knowledge. For Brown (2009), design thinking begins with the skills that designers have learned over time: aligning human needs with the technological resources available in the organization; intuition; the ability to recognize patterns; building ideas that have both emotional and functional significance; the ability to question one's surroundings and be empathetic; and the ability to express oneself other than in words or symbols. This last is one of the most important designer skills. The designer also uses the drawing process as a critical process, as discovery: drawing serves as a means of materializing, imagining, or discovering something that cannot be built in the mind alone, and as a means of communication with others, facilitating collaboration on projects. The ID-Think device is an external, temporary repository for ideas with which the designer interacts, and this externalization supports the necessary dialogue between the problem and the solution, which minimizes the cognitive stress of dealing with the quantity and complexity of knowledge that would otherwise be processed internally. The identification of concepts and their positioned graphical representation facilitates decision-making, the sharing of knowledge among everyone involved in managing the organization, and the observation of the systemic functioning of the company, focusing on the indicators judged suitable. The use of visual codes, which are available throughout the process, allows the team to navigate the process without losing its train of thought, and to observe the evolution of the environment and its influence on the organization in order to assist corrective actions. The nature of the research was exploratory, with an ex-post-facto design, using an ethnographic strategy through interviews and non-participant observation. After the application, the researchers understood the need to adapt the External System of the ID-Think device so that it includes the amount of knowledge needed to visualize the organization's management and/or the development of new products.
