A selection of scholarly literature on the topic "Semantic concepts extraction"

Format your citation in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Semantic concepts extraction".

Next to every work in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these details are available in the source's metadata.

Journal articles on the topic "Semantic concepts extraction":

1

Huang, Jingxiu, Ruofei Ding, Xiaomin Wu, Shumin Chen, Jiale Zhang, Lixiang Liu, and Yunxiang Zheng. "WERECE: An Unsupervised Method for Educational Concept Extraction Based on Word Embedding Refinement." Applied Sciences 13, no. 22 (November 14, 2023): 12307. http://dx.doi.org/10.3390/app132212307.

Abstract:
The era of educational big data has sparked growing interest in extracting and organizing educational concepts from massive amounts of information. Outcomes are of the utmost importance for artificial intelligence–empowered teaching and learning. Unsupervised educational concept extraction methods based on pre-trained models continue to proliferate due to ongoing advances in semantic representation. However, it remains challenging to directly apply pre-trained large language models to extract educational concepts; pre-trained models are built on extensive corpora and do not necessarily cover all subject-specific concepts. To address this gap, we propose a novel unsupervised method for educational concept extraction based on word embedding refinement (i.e., word embedding refinement–based educational concept extraction (WERECE)). It integrates a manifold learning algorithm to adapt a pre-trained model for extracting educational concepts while accounting for the geometric information in semantic computation. We further devise a discriminant function based on semantic clustering and Box–Cox transformation to enhance WERECE’s accuracy and reliability. We evaluate its performance on two newly constructed datasets, EDU-DT and EDUTECH-DT. Experimental results show that WERECE achieves an average precision up to 85.9%, recall up to 87.0%, and F1 scores up to 86.4%, which significantly outperforms baselines (TextRank, term frequency–inverse document frequency, isolation forest, K-means, and one-class support vector machine) on educational concept extraction. Notably, when WERECE is implemented with different parameter settings, its precision and recall sensitivity remain robust. WERECE also holds broad application prospects as a foundational technology, such as for building discipline-oriented knowledge graphs, enhancing learning assessment and feedback, predicting learning interests, and recommending learning resources.
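For readers who want a feel for the pipeline the abstract describes, here is a minimal Python sketch of its two central ingredients, centroid-based semantic clustering and a Box-Cox-transformed discriminant. It illustrates the general technique only, not the authors' WERECE implementation; the function name, the z-score cutoff, and the use of K-means are assumptions.

```python
# Minimal sketch of a WERECE-style selection step (not the authors' code):
# cluster candidate-term embeddings, then keep terms whose Box-Cox-transformed
# distance to their cluster centroid falls under a z-score cutoff.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

def select_concepts(embeddings, terms, n_clusters=5, z_cutoff=1.0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    # Distance of each term embedding to its assigned cluster centroid.
    dists = np.linalg.norm(embeddings - km.cluster_centers_[km.labels_], axis=1)
    # Box-Cox needs strictly positive input; shift zero distances slightly.
    transformed, _ = stats.boxcox(dists + 1e-9)
    z = (transformed - transformed.mean()) / transformed.std()
    return [t for t, score in zip(terms, z) if score < z_cutoff]
```

Terms close to a semantic cluster centre are kept as concept candidates; everything else is treated as noise.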
2

Li, Dao Wang. "Research on Text Conceptual Relation Extraction Based on Domain Ontology." Advanced Materials Research 739 (August 2013): 574–79. http://dx.doi.org/10.4028/www.scientific.net/amr.739.574.

Abstract:
At present, ontology learning research focuses on concept and relation extraction; traditional extraction methods ignore the influence of semantic factors on the extraction results and lack accurate extraction of the relations among concepts. To address this problem, this paper combines association rules with semantic similarity and applies the improved comprehensive semantic similarity to relation extraction through association-rule-based relation mining. The experiments show that relation extraction based on this method effectively improves the precision of the extraction results.
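The combination the abstract describes can be sketched as a score that blends association-rule confidence with a semantic similarity term. The weighting scheme and the parameter alpha below are assumptions, not the paper's exact measure.

```python
# Illustrative blend of association-rule confidence with semantic similarity
# (the linear weighting and alpha are assumed, not the paper's formula).
def rule_confidence(docs, a, b):
    """confidence(a -> b) over documents treated as transactions of concepts."""
    with_a = [d for d in docs if a in d]
    return sum(1 for d in with_a if b in d) / len(with_a) if with_a else 0.0

def combined_score(docs, a, b, similarity, alpha=0.5):
    """Score a candidate relation between concepts a and b."""
    return alpha * rule_confidence(docs, a, b) + (1 - alpha) * similarity(a, b)

docs = [{"ontology", "concept"}, {"ontology", "relation"}, {"concept", "relation"}]
print(combined_score(docs, "ontology", "concept", lambda a, b: 0.8))  # 0.65
```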
3

Katsadaki, Eirini, and Margarita Kokla. "Comparative Evaluation of Keyphrase Extraction Tools for Semantic Analysis of Climate Change Scientific Reports and Ontology Enrichment." AGILE: GIScience Series 5 (May 30, 2024): 1–7. http://dx.doi.org/10.5194/agile-giss-5-32-2024.

Abstract:
Keyphrase extraction is a process used for identifying important concepts and entities within unstructured information sources to facilitate ontology enrichment, semantic analysis, and information retrieval. In this paper, three different tools for keyphrase extraction are compared to evaluate their accuracy and effectiveness for extracting geospatial and climate change concepts from climate change reports: term frequency–inverse document frequency (TF-IDF), Amazon Comprehend, and YAKE. Climate change reports contain vital information for comprehending the complexity of climate change causes, impacts, and interconnections, and include a wealth of information on geospatial concepts, locations, and events, but the diverse terminology used complicates information extraction and organization. The highest-scoring keyphrases are further used to enrich and populate the SWEET ontology with concepts and instances related to climate change, and meaningful relations between them, to support semantic representation and formalization of knowledge.
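Of the three tools compared, the TF-IDF baseline is easy to reproduce generically with scikit-learn, as in the sketch below; the paper's exact preprocessing is not shown here, and Amazon Comprehend and YAKE are separate services/libraries.

```python
# Generic TF-IDF keyphrase ranking for one document in a corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

def top_keyphrases(documents, doc_index=0, k=10):
    vec = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
    tfidf = vec.fit_transform(documents)          # (n_docs, n_terms)
    row = tfidf[doc_index].toarray().ravel()      # scores for one document
    terms = vec.get_feature_names_out()
    return [terms[i] for i in row.argsort()[::-1][:k] if row[i] > 0]
```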
4

AlArfaj, Abeer. "Towards relation extraction from Arabic text: a review." International Robotics & Automation Journal 5, no. 5 (December 24, 2019): 212–15. http://dx.doi.org/10.15406/iratj.2019.05.00195.

Abstract:
Semantic relation extraction is an important component of ontologies that can support many applications, e.g., text mining, question answering, and information extraction. However, extracting semantic relations between concepts is not trivial and is one of the main challenges in the Natural Language Processing (NLP) field. The Arabic language has complex morphological, grammatical, and semantic aspects, since it is a highly inflectional and derivational language, which makes the task even more challenging. In this paper, we present a review of the state of the art for relation extraction from texts, addressing the progress and difficulties in this field. We discuss several aspects related to this task, considering the taxonomic and non-taxonomic relation extraction methods. The majority of relation extraction approaches implement a combination of statistical and linguistic techniques to extract semantic relations from text. We also give special attention to the state of the work on relation extraction from Arabic texts, which needs further progress.
5

Ji, Lei, Yujing Wang, Botian Shi, Dawei Zhang, Zhongyuan Wang, and Jun Yan. "Microsoft Concept Graph: Mining Semantic Concepts for Short Text Understanding." Data Intelligence 1, no. 3 (June 2019): 238–70. http://dx.doi.org/10.1162/dint_a_00013.

Abstract:
Knowledge is important for text-related applications. In this paper, we introduce Microsoft Concept Graph, a knowledge graph engine that provides concept tagging APIs to facilitate the understanding of human languages. Microsoft Concept Graph is built upon Probase, a universal probabilistic taxonomy consisting of instances and concepts mined from the Web. We start by introducing the construction of the knowledge graph through iterative semantic extraction and taxonomy construction procedures, which extract 2.7 million concepts from 1.68 billion Web pages. We then use conceptualization models to represent text in the concept space to empower text-related applications, such as topic search, query recommendation, Web table understanding and Ads relevance. Since the release in 2016, Microsoft Concept Graph has received more than 100,000 pageviews, 2 million API calls and 3,000 registered downloads from 50,000 visitors over 64 countries.
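The conceptualization step can be illustrated with a toy probabilistic scoring of candidate concepts for an instance. The counts below are invented and stand in for the instance-concept statistics that Probase mines from the Web.

```python
# Toy conceptualization: rank concepts for an instance by P(concept | instance)
# estimated from (instance, concept) co-occurrence counts (invented numbers).
from collections import Counter

pair_counts = Counter({("python", "programming language"): 120,
                       ("python", "snake"): 30,
                       ("microsoft", "company"): 500})

def conceptualize(instance, top_k=3):
    scores = {c: n for (i, c), n in pair_counts.items() if i == instance}
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(c, n / total) for c, n in ranked[:top_k]]

print(conceptualize("python"))  # [('programming language', 0.8), ('snake', 0.2)]
```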
6

Papadias, Evangelos, Margarita Kokla, and Eleni Tomai. "Educing knowledge from text: semantic information extraction of spatial concepts and places." AGILE: GIScience Series 2 (June 4, 2021): 1–7. http://dx.doi.org/10.5194/agile-giss-2-38-2021.

Abstract:
A growing body of geospatial research has shifted the focus from fully structured to semi-structured and unstructured content written in natural language. Natural language texts provide a wealth of knowledge about geospatial concepts, places, events, and activities that needs to be extracted and formalized to support semantic annotation, knowledge-based exploration, and semantic search. The paper presents a web-based prototype for the extraction of geospatial entities and concepts, and the subsequent semantic visualization and interactive exploration of the extraction results. A lightweight ontology anchored in natural language guides the interpretation of natural language texts and the extraction of relevant domain knowledge. The approach is applied to three heterogeneous sources which provide a wealth of spatial concepts and place names.
7

Hong Doan, Phuoc Thi, Ngamnij Arch-int, and Somjit Arch-int. "A Semantic Framework for Extracting Taxonomic Relations from Text Corpus." International Arab Journal of Information Technology 17, no. 3 (May 1, 2019): 325–37. http://dx.doi.org/10.34028/iajit/17/3/6.

Abstract:
Nowadays, ontologies have been exploited in many applications due to their abilities in representing knowledge and inferring new knowledge. However, the manual construction of ontologies is tedious and time-consuming. Therefore, the automated construction of ontologies from text has been investigated. The extraction of taxonomic relations between concepts is a crucial step in constructing domain ontologies. To obtain taxonomic relations from a text corpus, especially when the data is deficient, the approach of using the web as a source of collective knowledge (a.k.a. the web-based approach) is usually applied. The important challenge of this approach is how to collect relevant knowledge from a large number of web pages. To overcome this issue, we propose a framework that combines Word Sense Disambiguation (WSD) and the web-based approach to extract taxonomic relations from a domain-text corpus. This framework consists of two main stages: concept extraction and taxonomic-relation extraction. Concepts acquired in the concept-extraction stage are disambiguated by the WSD module and then passed to the taxonomic-relation extraction stage. To evaluate the efficiency of the proposed framework, we conduct experiments on datasets in two domains, tourism and sport. The obtained results show that the proposed method is effective on corpora which are insufficient or have no training data. Besides, the proposed method outperforms the state-of-the-art method on corpora with high WSD results.
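The WSD stage of such a framework can be approximated with an off-the-shelf algorithm. The sketch below uses NLTK's classic Lesk implementation as a stand-in for the paper's disambiguation module; the example sentence is invented, and the WordNet corpus must be downloaded first.

```python
# WSD stand-in using NLTK's Lesk algorithm (requires: nltk.download("wordnet")).
from nltk.wsd import lesk

context = "the resort offers diving excursions along the coral reef".split()
sense = lesk(context, "diving")  # returns a WordNet Synset or None
if sense is not None:
    print(sense.name(), "-", sense.definition())
```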
8

Chahal, Poonam, Manjeet Singh, and Suresh Kumar. "Semantic Analysis Based Approach for Relevant Text Extraction Using Ontology." International Journal of Information Retrieval Research 7, no. 4 (October 2017): 19–36. http://dx.doi.org/10.4018/ijirr.2017100102.

Abstract:
Semantic analysis is computed by extracting the interrelated concepts used by an author in the text of a document. The concepts and the links, i.e., relationships, among them are most relevant, as they provide the maximum information related to the event or activity described by the author in the document. The relevant information retrieved from the text helps in constructing a summary of a large text present in the document. This summary can further be represented in the form of an ontology and utilized in various application areas of the information retrieval process, such as crawling, indexing, and ranking. The constructed ontologies can be compared with each other to calculate a similarity index, based on semantic analysis, between any two texts. This paper gives a novel technique for retrieving relevant semantic information represented in the form of an ontology for true semantic analysis of a given text.
9

Abbas, Asim, Muhammad Afzal, Jamil Hussain, Taqdir Ali, Hafiz Syed Muhammad Bilal, Sungyoung Lee, and Seokhee Jeon. "Clinical Concept Extraction with Lexical Semantics to Support Automatic Annotation." International Journal of Environmental Research and Public Health 18, no. 20 (October 9, 2021): 10564. http://dx.doi.org/10.3390/ijerph182010564.

Abstract:
Extracting clinical concepts, such as problems, diagnosis, and treatment, from unstructured clinical narrative documents enables data-driven approaches such as machine and deep learning to support advanced applications such as clinical decision-support systems, the assessment of disease progression, and the intelligent analysis of treatment efficacy. Various tools such as cTAKES, Sophia, MetaMap, and other rules-based approaches and algorithms have been used for automatic concept extraction. Recently, machine- and deep-learning approaches have been used to extract, classify, and accurately annotate terms and phrases. However, the requirement of an annotated dataset, which is labor-intensive, impedes the success of data-driven approaches. A rule-based mechanism could support the process of annotation, but existing rule-based approaches fail to adequately capture contextual, syntactic, and semantic patterns. This study intends to introduce a comprehensive rule-based system that automatically extracts clinical concepts from unstructured narratives with higher accuracy and transparency. The proposed system is a pipelined approach, capable of recognizing clinical concepts of three types, problem, treatment, and test, in the dataset collected from a published repository as a part of the I2b2 challenge 2010. The system’s performance is compared with that of three existing systems: Quick UMLS, BIO-CRF, and the Rules (i2b2) model. Compared to the baseline systems, the average F1-score of 72.94% was found to be 13% better than Quick UMLS, 3% better than BIO CRF, and 30.1% better than the Rules (i2b2) model. Individually, the system performance was noticeably higher for problem-related concepts, with an F1-score of 80.45%, followed by treatment-related concepts and test-related concepts, with F1-scores of 76.06% and 55.3%, respectively. The proposed methodology significantly improves the performance of concept extraction from unstructured clinical narratives by exploiting the linguistic and lexical semantic features. The approach can ease the automatic annotation process of clinical data, which ultimately improves the performance of supervised data-driven applications trained with these data.
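As a flavour of what a rule-based extractor for the three concept types looks like, here is a deliberately tiny sketch; the trigger lexicons and regular expressions are invented placeholders, far simpler than the lexical-semantic rules the paper proposes.

```python
# Toy rule-based clinical concept extractor (illustrative lexicons only).
import re

RULES = {
    "problem":   r"\b(hypertension|diabetes|pneumonia|chest pain)\b",
    "test":      r"\b(x-ray|mri|blood glucose|ecg)\b",
    "treatment": r"\b(aspirin|insulin|antibiotics|chemotherapy)\b",
}

def extract_concepts(note):
    found = []
    for label, pattern in RULES.items():
        for m in re.finditer(pattern, note, flags=re.IGNORECASE):
            found.append((m.group(0), label, m.start()))
    return sorted(found, key=lambda t: t[2])  # in order of appearance

print(extract_concepts("Started insulin for diabetes; ordered blood glucose."))
```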
10

Arnold, Patrick, and Erhard Rahm. "Automatic Extraction of Semantic Relations from Wikipedia." International Journal on Artificial Intelligence Tools 24, no. 02 (April 2015): 1540010. http://dx.doi.org/10.1142/s0218213015400102.

Abstract:
We introduce a novel approach to extract semantic relations (e.g., is-a and part-of relations) from Wikipedia articles. These relations are used to build up a large and up-to-date thesaurus providing background knowledge for tasks such as determining semantic ontology mappings. Our automatic approach uses a comprehensive set of semantic patterns, finite state machines and NLP techniques to extract millions of relations between concepts. An evaluation for different domains shows the high quality and effectiveness of the proposed approach. We also illustrate the value of the newly found relations for improving existing ontology mappings.
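Pattern-based relation extraction of this kind can be sketched with two classic Hearst-style patterns. The regexes below are illustrative and much cruder than the paper's pattern set and finite state machines.

```python
# Two Hearst-style patterns as a crude stand-in for the paper's machinery.
import re

def extract_relations(text):
    rels = []
    # "X such as Y" implies Y is-a X.
    for m in re.finditer(r"([\w ]+?) such as ([\w ]+)", text):
        rels.append((m.group(2).strip(), "is-a", m.group(1).strip()))
    # "X is part of Y" implies X part-of Y.
    for m in re.finditer(r"([\w ]+?) is part of ([\w ]+)", text):
        rels.append((m.group(1).strip(), "part-of", m.group(2).strip()))
    return rels

print(extract_relations("vehicles such as cars and trucks"))
# [('cars and trucks', 'is-a', 'vehicles')]  -- enumeration left unsplit
```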

Dissertations on the topic "Semantic concepts extraction":

1

Liang, Antoni. "Face Image Retrieval with Landmark Detection and Semantic Concepts Extraction." Thesis, Curtin University, 2017. http://hdl.handle.net/20.500.11937/54081.

Abstract:
This thesis proposes various novel approaches for improving the performance of automatic facial landmark detection systems based on the pictorial tree structure model. Furthermore, a robust glasses landmark detection system is also proposed, as glasses are commonly worn. These proposed approaches are employed to develop an automatic semantics-based face image retrieval system. The experimental results demonstrate significant improvements in accuracy and efficiency for all the proposed approaches.
2

Tang, My Thao. "Un système interactif et itératif extraction de connaissances exploitant l'analyse formelle de concepts." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0060/document.

Abstract:
In this thesis, we present a methodology for interactive and iterative knowledge extraction from texts - the KESAM system: a tool for Knowledge Extraction and Semantic Annotation Management. KESAM is based on Formal Concept Analysis for extracting knowledge from textual resources that supports expert interaction. In the KESAM system, knowledge extraction and semantic annotation are unified into one single process to benefit both knowledge extraction and semantic annotation. Semantic annotations are used for formalizing the source of knowledge in texts and keeping the traceability between the knowledge model and the source of knowledge. The knowledge model is, in return, used for improving semantic annotations. The KESAM process has been designed to permanently preserve the link between the resources (texts and semantic annotations) and the knowledge model. The core of the process is Formal Concept Analysis (FCA), which builds the knowledge model, i.e. the concept lattice, and ensures the link between the knowledge model and annotations. In order to get the resulting lattice as close as possible to domain experts' requirements, we introduce an iterative process that enables expert interaction on the lattice. Experts are invited to evaluate and refine the lattice; they can make changes in the lattice until they reach an agreement between the model and their own knowledge or the application's needs. Thanks to the link between the knowledge model and semantic annotations, the knowledge model and semantic annotations can co-evolve in order to improve their quality with respect to domain experts' requirements. Moreover, by using FCA to build concepts with definitions of sets of objects and sets of attributes, the KESAM system is able to take into account both atomic and defined concepts, i.e. concepts that are defined by a set of attributes. In order to bridge the possible gap between the representation model based on a concept lattice and the representation model of a domain expert, we then introduce a formal method for integrating expert knowledge into concept lattices in such a way that we can maintain the lattice structure. The expert knowledge is encoded as a set of attribute dependencies which is aligned with the set of implications provided by the concept lattice, leading to modifications in the original lattice. The method also allows the experts to keep a trace of the changes occurring in the original lattice and the final constrained version, and to access how concepts in practice are related to concepts automatically issued from the data. The method uses extensional projections to build the constrained lattices without changing the original data and to provide the trace of changes. From an original lattice, two different projections produce two different constrained lattices, and thus the gap between the representation model based on a concept lattice and the representation model of a domain expert is filled with projections.
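To make the FCA core concrete, the following toy sketch enumerates the formal concepts of a small binary context by brute force. This is adequate for toy data only; the context and document names are invented, while KESAM itself builds lattices from annotated texts.

```python
# Brute-force enumeration of formal concepts (extent, intent) of a tiny context.
from itertools import combinations

context = {  # object -> set of attributes (toy example)
    "doc1": {"extraction", "annotation"},
    "doc2": {"extraction", "lattice"},
    "doc3": {"extraction", "annotation", "lattice"},
}
ALL_ATTRS = set().union(*context.values())

def common_attrs(objs):
    return set.intersection(*(context[o] for o in objs)) if objs else set(ALL_ATTRS)

def objects_with(attrs):
    return {o for o, a in context.items() if attrs <= a}

concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        intent = common_attrs(objs)            # A'  (shared attributes)
        extent = objects_with(intent)          # A'' (closure of the object set)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```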
3

Joseph, Daniel. "Linking information resources with automatic semantic extraction." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/linking-information-resources-with-automatic-semantic-extraction(ada2db36-4366-441a-a0a9-d76324a77e2c).html.

Abstract:
Knowledge is a critical dimension in the problem solving processes of human intelligence. Consequently, enabling intelligent systems to provide advanced services requires that their artificial intelligence routines have access to knowledge of relevant domains. Ontologies are often utilised as the formal conceptualisation of domains, in that they identify and model the concepts and relationships of the targeted domain. However, complexities inherent in ontology development and maintenance have limited their availability. Separate from the conceptualisation component, domain knowledge also encompasses the concept membership of object instances within the domain. The need to capture both the domain model and the current state of instances within the domain has motivated the import of Formal Concept Analysis into intelligent systems research. Formal Concept Analysis, which provides a simplified model of a domain, has the advantage that it not only defines concepts in terms of their attribute description but also simultaneously ascribes object instances to their appropriate concepts. Nonetheless, a significant drawback of Formal Concept Analysis is that when applied to a large dataset, the lattice with which it models a domain is often composed of a copious number of concepts, many of which are arguably unnecessary or invalid. In this research a novel measure is introduced which assigns a relevance value to concepts in the lattice. This measure is termed the Collapse Index and is based on the minimum number of object instances that need be removed from a domain in order for a concept to be expunged from the lattice. The mathematics that underpin its origin and behaviour are detailed in the thesis, showing that if the relevance of a concept is defined by the Collapse Index: a concept will eventually lose relevance if one of its immediate subconcepts increasingly acquires object instance support; and a concept has its highest relevance when its immediate subconcepts have equal or near equal object instance support. In addition, experimental evaluation is provided where the Collapse Index demonstrated comparable or better performance than the current prominent alternatives in: being consistent across samples; the ability to recall concepts in noisy lattices; and efficiency of calculation. It is also demonstrated that the Collapse Index affords concepts with low object instance support the opportunity to have a higher relevance than those of high support. The second contribution to knowledge is an approach to semantic extraction from a dataset where the Collapse Index is included as a method of selecting concepts for inclusion in a final concept hierarchy. The utility of the approach is demonstrated by reviewing its inclusion in the implementation of a recommender system. This recommender system serves as the final contribution, featuring a unique design where lattices represent user profiles and concepts in these profiles are pruned using the Collapse Index. Results showed that pruning of profile lattices enabled by the Collapse Index improved the success levels of movie recommendations if the appropriate thresholds are set.
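The abstract gives the Collapse Index's behaviour rather than its formula. One simplified reading consistent with that behaviour is sketched below; this is an assumption, not the thesis' formal definition: count the fewest object removals needed to collapse a concept onto its largest immediate subconcept.

```python
# Simplified Collapse-Index-like relevance (assumed reading, see lead-in).
def collapse_index(extent, child_extents):
    """extent: set of objects; child_extents: extents of immediate subconcepts."""
    if not child_extents:
        return len(extent)  # no subconcept to collapse into
    return len(extent) - max(len(c) for c in child_extents)

# Balanced children -> high index; one dominant child -> low index.
print(collapse_index({1, 2, 3, 4}, [{1, 2}, {3, 4}]))   # 2
print(collapse_index({1, 2, 3, 4}, [{1, 2, 3}, {4}]))   # 1
```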
4

De, Maio Carmen. "Fuzzy concept analysis for semantic knowledge extraction." Doctoral thesis, Universita degli studi di Salerno, 2012. http://hdl.handle.net/10556/1307.

Abstract:
2010 - 2011
Availability of controlled vocabularies, ontologies, and so on is an enabling feature for providing added value in terms of knowledge management. Nevertheless, the design, maintenance and construction of domain ontologies are a human-intensive and time-consuming task. Knowledge extraction consists of automatic techniques aimed at identifying and defining relevant concepts and relations of the domain of interest by analyzing structured (relational databases, XML) and unstructured (text, documents, images) sources. Specifically, the methodology for knowledge extraction defined in this research work is aimed at enabling automatic ontology/taxonomy construction from existing resources in order to obtain useful information. For instance, the experimental results take into account data produced with Web 2.0 tools (e.g., RSS-Feed, Enterprise Wiki, Corporate Blog, etc.), text documents, and so on. The final results of the knowledge extraction methodology are taxonomies or ontologies represented in a machine-oriented manner by means of semantic web technologies, such as RDFS, OWL and SKOS. The resulting knowledge models have been applied to different goals. On the one hand, the methodology has been applied in order to extract ontologies and taxonomies and to semantically annotate text. On the other hand, the resulting ontologies and taxonomies are exploited in order to enhance information retrieval performance, to categorize incoming data, and to provide an easy way to find interesting resources (such as faceted browsing). Specifically, the following objectives have been addressed in this research work: ontology/taxonomy extraction, which concerns the automatic extraction of hierarchical conceptualizations (i.e., taxonomies) and relations expressed by means of typical description logic constructs (i.e., ontologies); information retrieval, the definition of a technique to perform concept-based retrieval of information according to user queries; faceted browsing, in order to automatically provide faceted browsing capabilities according to the categorization of the extracted contents; and semantic annotation, the definition of a text analysis process aimed at automatically annotating identified subjects and predicates. The experimental results have been obtained in several application domains: e-learning, enterprise human resource management, and clinical decision support systems. Future challenges go in the following directions: investigating approaches to support ontology alignment and merging applied to knowledge management.
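The machine-oriented output formats the thesis targets (RDFS, OWL, SKOS) can be produced with standard tooling. The sketch below emits a two-concept SKOS taxonomy with rdflib; the concept names and namespace URI are placeholders.

```python
# Serializing an extracted taxonomy fragment as SKOS with rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/taxonomy/")  # placeholder namespace
g = Graph()
g.bind("skos", SKOS)

def add_concept(label, broader=None):
    node = EX[label.replace(" ", "_")]
    g.add((node, RDF.type, SKOS.Concept))
    g.add((node, SKOS.prefLabel, Literal(label, lang="en")))
    if broader is not None:
        g.add((node, SKOS.broader, EX[broader.replace(" ", "_")]))
    return node

add_concept("knowledge extraction")
add_concept("ontology learning", broader="knowledge extraction")
print(g.serialize(format="turtle"))
```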
5

Caubriere, Antoine. "Du signal au concept : réseaux de neurones profonds appliqués à la compréhension de la parole." Thesis, Le Mans, 2021. https://tel.archives-ouvertes.fr/tel-03177996.

Abstract:
This thesis is part of the field of deep learning applied to spoken language understanding. Until now, this task was performed through a pipeline of components implementing, for example, a speech recognition system, then different natural language processing modules, before involving a language understanding system applied to the enriched automatic transcriptions. Recently, work in the field of speech recognition has shown that it is possible to produce a sequence of words directly from the acoustic signal. Within the framework of this thesis, the aim is to exploit these advances and extend them to design a system composed of a single neural model fully optimized for the spoken language understanding task, from signal to concept. First, we present a state of the art describing the principles of deep learning, speech recognition, and speech understanding. Then, we describe the contributions made along three main axes. We propose a first system answering the problem posed and apply it to a named entity recognition task. Then, we propose a transfer learning strategy guided by a curriculum learning approach. This strategy is based on the generic knowledge learned to improve the performance of a neural system on a semantic concept extraction task. Next, we perform an analysis of the errors produced by our approach, while studying the functioning of the proposed neural architecture. Finally, we set up a confidence measure to evaluate the reliability of a hypothesis produced by our system.
6

Kulkarni, Swarnim. "Capturing semantics using a link analysis based concept extractor approach." Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1526.

7

Mendes, Pablo N. "Adaptive Semantic Annotation of Entity and Concept Mentions in Text." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1401665504.

8

Tolle, Kristin M. "Domain-independent semantic concept extraction using corpus linguistics, statistics and artificial intelligence techniques." Diss., The University of Arizona, 2003. http://hdl.handle.net/10150/280502.

Abstract:
For this dissertation two software applications were developed and three experiments were conducted to evaluate the viability of a unique approach to medical information extraction. The first system, the AZ Noun Phraser, was designed as a concept extraction tool. The second application, ANNEE, is a neural net-based entity extraction (EE) system. These two systems were combined to perform concept extraction and semantic classification specifically for use in medical document retrieval systems. The goal of this research was to create a system that automatically (without human interaction) enabled semantic type assignment, such as gene name and disease, to concepts extracted from unstructured medical text documents. Improving conceptual analysis of search phrases has been shown to improve the precision of information retrieval systems. Enabling this capability in the field of medicine can aid medical researchers, doctors and librarians in locating information, potentially improving healthcare decision-making. Due to the flexibility and non-domain specificity of the implementation, these applications have also been successfully deployed in other text retrieval experimentation for law enforcement (Atabakhsh et al., 2001; Hauck, Atabakhsh, Ongvasith, Gupta, & Chen, 2002), medicine (Tolle & Chen, 2000), query expansion (Leroy, Tolle, & Chen, 2000), web document categorization (Chen, Fan, Chau, & Zeng, 2001), Internet spiders (Chau, Zeng, & Chen, 2001), collaborative agents (Chau, Zeng, Chen, Huang, & Hendriawan, 2002), competitive intelligence (Chen, Chau, & Zeng, 2002), and Internet chat-room data visualization (Zhu & Chen, 2001).
9

Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés." Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.

Abstract:
In this thesis, spoken language understanding (SLU) is studied in the application context of telephone dialogues with defined goals (hotel booking, for example). Historically, SLU was performed through a cascade of systems: a first system would transcribe the speech into words, and a natural language understanding system would link those words to a semantic annotation. The development of deep neural methods has led to the emergence of end-to-end architectures, where the understanding task is performed by a single system, applied directly to the speech signal to extract the semantic annotation. Recently, so-called self-supervised learning (SSL) pre-trained models have brought new advances in natural language processing (NLP). Learned in a generic way on very large datasets, they can then be adapted for other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models. However, neither the pipeline nor the end-to-end architecture is perfect. In this thesis, we study these architectures and propose hybrid versions that attempt to benefit from the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluated different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture.

Books on the topic "Semantic concepts extraction":

1

Yang, Sijia, and Sandra González-Bailón. Semantic Networks and Applications in Public Opinion Research. Edited by Jennifer Nicoll Victor, Alexander H. Montgomery, and Mark Lubell. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780190228217.013.14.

Abstract:
Semantic networks represent and model messages and discourse as a relational structure, emphasizing patterns of interdependence among semantic units or actors-concepts. This chapter traces the epistemological roots of semantic networks, then illustrates with examples how this approach can contribute to the study of political rhetoric or opinions. It focuses on three levels of analysis: cognitive mapping at the individual level, discourse analysis at the interpersonal level, and framing and salience at the collective level. Drawing from the rich literature on natural language processing and machine learning, the chapter introduces readers to essential methodological considerations when extracting and building up semantic networks from textual data. It also offers a discussion on the relevance of semantic networks to analyzing public opinion, especially as it manifests in discursive and deliberative theories of democracy.
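As a minimal illustration of the extraction step the chapter discusses, the sketch below builds a word co-occurrence network with networkx. The tokenization and stop-word list are deliberately naive, and the sentences are invented.

```python
# Build a semantic (co-occurrence) network from sentences.
import itertools
import networkx as nx

STOP = {"the", "a", "of", "and", "in", "is", "to"}

def semantic_network(sentences):
    g = nx.Graph()
    for s in sentences:
        words = [w for w in s.lower().split() if w.isalpha() and w not in STOP]
        for u, v in itertools.combinations(set(words), 2):
            w = g.get_edge_data(u, v, {"weight": 0})["weight"]
            g.add_edge(u, v, weight=w + 1)  # accumulate co-occurrence counts
    return g

net = semantic_network(["Public opinion shapes policy debate",
                        "Policy frames shape public opinion"])
print(sorted(net.edges(data="weight")))
```

Node centralities or community structure of such a graph are then what the chapter's three levels of analysis operate on.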

Book chapters on the topic "Semantic concepts extraction":

1

Atapattu, Thushari, Katrina Falkner, and Nickolas Falkner. "Automated Extraction of Semantic Concepts from Semi-structured Data: Supporting Computer-Based Education through the Analysis of Lecture Notes." In Lecture Notes in Computer Science, 161–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32600-4_13.

2

Cavaliere, Danilo, and Sabrina Senatore. "Emotional Concept Extraction Through Ontology-Enhanced Classification." In Metadata and Semantic Research, 52–63. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36599-8_5.

3

Tosi, Mauro Dalle Lucca, and Julio Cesar dos Reis. "C-Rank: A Concept Linking Approach to Unsupervised Keyphrase Extraction." In Metadata and Semantic Research, 236–47. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36599-8_21.

4

Yuan, Siyu, Deqing Yang, Jiaqing Liang, Jilun Sun, Jingyue Huang, Kaiyan Cao, Yanghua Xiao, and Rui Xie. "Large-Scale Multi-granular Concept Extraction Based on Machine Reading Comprehension." In The Semantic Web – ISWC 2021, 93–110. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88361-4_6.

5

Park, Kyung-Wook, and Dong-Ho Lee. "Full-Automatic High-Level Concept Extraction from Images Using Ontologies and Semantic Inference Rules." In The Semantic Web – ASWC 2006, 307–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11836025_31.

6

Ghannay, Sahar, Antoine Caubrière, Salima Mdhaffar, Gaëlle Laperrière, Bassam Jabaian, and Yannick Estève. "Where Are We in Semantic Concept Extraction for Spoken Language Understanding?" In Speech and Computer, 202–13. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87802-3_19.

7

AL-Aswadi, Fatima N., Huah Yong Chan, and Keng Hoon Gan. "Extracting Semantic Concepts and Relations from Scientific Publications by Using Deep Learning." In Lecture Notes on Data Engineering and Communications Technologies, 374–83. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70713-2_35.

8

Li, Yanlin, and Chu-Ren Huang. "Extracting Concepts and Semantic Associates for Teaching Tang 300 Poems to L2 Learners." In Lecture Notes in Computer Science, 233–43. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-28953-8_18.

9

Zentgraf, Sven, Sherief Ali, and Markus König. "Concept for Enriching NISO-STS Standards with Machine-Readable Requirements and Validation Rules." In CONVR 2023 - Proceedings of the 23rd International Conference on Construction Applications of Virtual Reality, 718–28. Florence: Firenze University Press, 2023. http://dx.doi.org/10.36253/979-12-215-0289-3.72.

Abstract:
During building project planning, various standards, such as material specifications, value ranges, and construction regulations, must be considered. When analyzing a regulation for its BIM-based use, it must be identified which information can be checked directly or indirectly using a BIM model. The basis for the directly checkable information requirements is the explicit description of object classes, object types, properties, and values. Additionally, complex validation rules can be derived from the standards. These information extractions are mostly performed manually and laboriously on text-based regulatory documents. To provide a better data format, the NISO proposed the Standard Tag Suite (NISO-STS), which is an XML format for publishing and exchanging full-text content and metadata of standards. This paper proposes a concept to enrich standards in NISO-STS format with information requirements and validation rules to provide a machine-interpretable semantic knowledge base for BIM processes. To achieve this, the concept utilizes natural language processing (NLP) methods to extract semantic information from the standards. Furthermore, the paper introduces a workflow to transfer the gathered knowledge into the XML-based standard. This allows the acquired semantic knowledge to be used in BIM processes and directly updated in future versions of the standards. To show the applicability of the concept, an approach is presented in which the obtained information is stored and used as a queryable knowledge base. The resulting database is used by a querying assistant, in which a user can enter keywords and questions that are translated into SPARQL queries to provide answers for the given input.

Conference papers on the topic "Semantic concepts extraction":

1

Hübner, Marc, Christoph Alt, Robert Schwarzenberg, and Leonhard Hennig. "Defx at SemEval-2020 Task 6: Joint Extraction of Concepts and Relations for Definition Extraction." In Proceedings of the Fourteenth Workshop on Semantic Evaluation. Stroudsburg, PA, USA: International Committee for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.semeval-1.92.

2

Tian, Qingwen, Shixing Zhou, Yu Cheng, Jianxia Chen, Yi Gao, and Shuijing Zhang. "Curriculum Semantic Retrieval System based on Distant Supervision." In 7th International Conference on Software Engineering and Applications (SOFEA 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111603.

Abstract:
A knowledge graph is a semantic network that reveals the relationships between entities; its construction describes the various entities, concepts, and relationships of the real world. Since a knowledge graph can effectively reveal the relationships between different knowledge items, it has been widely utilized in intelligent education. In particular, relation extraction is a critical part of a knowledge graph and plays a very important role in its construction. According to the magnitude of data labeling, entity relation extraction tasks in deep learning can be divided into two categories: supervised and distantly supervised. Supervised learning approaches can extract effective entity relations. However, these approaches rely heavily on labeled data, which makes them time-consuming and labor-intensive. The distant supervision approach has attracted wide attention from researchers because it can generate entity relation extractions automatically. However, the development and application of the distantly supervised approach has been seriously hindered by noise, lack of information, and class imbalance in relation extraction tasks. Motivated by the above analysis, this paper proposes a novel distantly supervised model for extracting relationships between curriculum knowledge points. It comprises, first, a distantly supervised relation extraction model based on a sentence-bag attention mechanism to extract the relationships of curriculum points; second, knowledge graph construction based on a knowledge ontology; and third, the development of a Web-based curriculum semantic retrieval platform. Compared with existing advanced models, the AUC of this system is increased by 14.2%. At the same time, taking the "big data processing" course in the computer field as an example, a relation extraction result with an F1 value of 88.1% is achieved. The experimental results show that the proposed model provides an effective solution for the development and application of knowledge graphs in the field of intelligent education.
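The sentence-bag attention mechanism at the heart of such distantly supervised models can be sketched in a few lines of PyTorch. This is a generic illustration of the mechanism, not the paper's architecture; the dimensions and the single learned query vector are arbitrary choices.

```python
# Generic sentence-bag attention for distantly supervised relation extraction:
# weight the sentences in a bag by their relevance to a learned query, then
# classify the aggregated bag representation.
import torch
import torch.nn as nn

class BagAttention(nn.Module):
    def __init__(self, hidden_dim, n_relations):
        super().__init__()
        self.query = nn.Parameter(torch.randn(hidden_dim))
        self.classifier = nn.Linear(hidden_dim, n_relations)

    def forward(self, sentence_reps):            # (n_sentences, hidden_dim)
        scores = sentence_reps @ self.query      # (n_sentences,)
        weights = torch.softmax(scores, dim=0)   # attention over the bag
        bag_rep = weights @ sentence_reps        # (hidden_dim,)
        return self.classifier(bag_rep)          # relation logits

logits = BagAttention(hidden_dim=64, n_relations=5)(torch.randn(3, 64))
print(logits.shape)  # torch.Size([5])
```

Noisy sentences in a bag receive low attention weights, which is how such models mitigate the labeling noise inherent in distant supervision.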
3

Chen, Jiaoyan, Freddy Lecue, Jeff Z. Pan, and Huajun Chen. "Learning from Ontology Streams with Semantic Concept Drift." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/133.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Data stream learning has been widely studied for extracting knowledge structures from continuous and rapid data records. In the Semantic Web, data is interpreted in ontologies and its ordered sequence is represented as an ontology stream. Our work exploits the semantics of such streams to tackle the problem of concept drift, i.e., unexpected changes in data distribution that cause most models to become less accurate as time passes. To this end, we revisit (i) semantic inference in the context of supervised stream learning, and (ii) models with semantic embeddings. The experiments show accurate prediction with data from Dublin and Beijing.
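The drift problem itself can be illustrated with a deliberately simple, non-semantic baseline: monitor windowed accuracy on the stream and flag a sudden drop. The window size and threshold below are arbitrary assumptions, and the sketch ignores the ontology semantics that are this paper's actual contribution.

```python
# Toy drift detector: flag drift when windowed accuracy falls well below
# the previously observed baseline.
from collections import deque

class AccuracyDriftDetector:
    def __init__(self, window: int = 100, drop_threshold: float = 0.15):
        self.recent = deque(maxlen=window)
        self.baseline = None
        self.drop_threshold = drop_threshold

    def update(self, correct: bool) -> bool:
        """Record one prediction outcome; return True if drift is suspected."""
        self.recent.append(1.0 if correct else 0.0)
        if len(self.recent) < self.recent.maxlen:
            return False
        accuracy = sum(self.recent) / len(self.recent)
        if self.baseline is None:
            self.baseline = accuracy
            return False
        if self.baseline - accuracy > self.drop_threshold:
            self.baseline = accuracy  # re-baseline, e.g. after retraining
            return True
        return False
```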
4

Takeda, Hideaki, Susumu Hamada, Tetsuo Tomiyama, and Hiroyuki Yoshikawa. "A Cognitive Approach to the Analysis of Design Processes." In ASME 1990 Design Technical Conferences. American Society of Mechanical Engineers, 1990. http://dx.doi.org/10.1115/detc1990-0121.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The scientific analysis of design is indispensable for establishing a rich and useful design theory. To accomplish this, we propose a practical method for investigating design, called the design experiment, together with methods for analyzing its results. Since the design experiment is performed mainly with the protocol analysis method, we begin by discussing the experimental method in comparison with psychological experiments. Furthermore, we introduce a new method using a CAD-like system that can record drawing processes precisely. We analyze the protocol data in two ways. One is analysis by extraction of knowledge, which clarifies how knowledge is used and what knowledge is needed in design processes. The other is based on a cognitive approach: protocol data is transformed into a semantic network of concepts, which is used as the network in the connectionist paradigm. By regarding the activations of nodes as the intensity of the designer's attention, we can identify what the designer pays attention to and how this attention shifts during the design process.
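The connectionist reading of the protocol data, in which node activation stands for the designer's attention, can be sketched as spreading activation over a concept network; the toy graph, decay factor, and update rule below are illustrative assumptions rather than the authors' formulation.

```python
# Spreading activation over a small semantic network of design concepts.
import networkx as nx

def spread_activation(graph, seeds, decay=0.5, steps=3):
    """Propagate activation from seed concepts through weighted edges."""
    activation = {n: 0.0 for n in graph.nodes}
    for seed in seeds:
        activation[seed] = 1.0
    for _ in range(steps):
        incoming = {n: 0.0 for n in graph.nodes}
        for u, v, data in graph.edges(data=True):
            weight = data.get("weight", 1.0)
            incoming[v] += decay * weight * activation[u]
            incoming[u] += decay * weight * activation[v]  # undirected spread
        for n in graph.nodes:
            activation[n] = max(activation[n], incoming[n])
    return activation

g = nx.Graph()
g.add_edges_from([("gear", "shaft"), ("shaft", "bearing"), ("gear", "torque")])
print(spread_activation(g, seeds=["gear"]))  # highest activation near "gear"
```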
5

Shi, Botian, Lei Ji, Pan Lu, Zhendong Niu, and Nan Duan. "Knowledge Aware Semantic Concept Expansion for Image-Text Matching." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/720.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Image-text matching is a vital cross-modality task in artificial intelligence and has attracted increasing attention in recent years. Existing works have shown that learning semantic concepts is useful for enhancing image representation and can significantly improve the performance of both image-to-text and text-to-image retrieval. However, existing models simply detect semantic concepts from a given image and are thus less able to deal with long-tail and occluded concepts. Concepts that frequently co-occur in the same scene, e.g. bedroom and bed, provide common-sense knowledge for discovering other semantically related concepts. In this paper, we develop a Scene Concept Graph (SCG) by aggregating image scene graphs and extracting frequently co-occurring concept pairs as scene common-sense knowledge. Moreover, we propose a novel model that incorporates this knowledge to improve image-text matching. Specifically, semantic concepts are detected from images and then expanded by the SCG. After learning to select relevant contextual concepts, we fuse their representations with the image embedding feature and feed the result into the matching module. Extensive experiments conducted on the Flickr30K and MSCOCO datasets show that our model achieves state-of-the-art results owing to the effectiveness of incorporating the external SCG.
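The expansion step can be pictured with a toy stand-in for the Scene Concept Graph: detected concepts pull in their most frequently co-occurring neighbours, so that occluded or long-tail concepts can still enter the image representation. The co-occurrence counts below are invented for the example, not taken from the paper's datasets.

```python
# Toy concept expansion over a co-occurrence graph (concept -> neighbour counts).
scene_concept_graph = {
    "bedroom": {"bed": 940, "lamp": 310, "pillow": 270},
    "bed": {"bedroom": 940, "pillow": 520},
}

def expand_concepts(detected, graph, top_k=2, min_count=300):
    expanded = set(detected)
    for concept in detected:
        neighbours = sorted(graph.get(concept, {}).items(), key=lambda kv: -kv[1])
        expanded.update(n for n, count in neighbours[:top_k] if count >= min_count)
    return expanded

print(expand_concepts({"bedroom"}, scene_concept_graph))
# {'bedroom', 'bed', 'lamp'} -- "bed" is recovered even if occluded in the image
```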
6

Popa, Ramona cristina, Nicolae Goga, and Bujor ionel Pavaloiu. "PROVIDING SEMANTICALLY-ENABLED INFORMATION FOR SMES KNOWLEDGE WORKERS: MULTI-AGENT-BASED MIDDLEWARE." In eLSE 2019. Carol I National Defence University Publishing House, 2019. http://dx.doi.org/10.12753/2066-026x-19-046.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objective of this research is to present a multi-agent-based middleware that provides semantically enabled information for SME knowledge workers. The market currently offers several instruments for enabling knowledge management, such as database management systems, data warehouses, and intranet and extranet knowledge portals. However, these technological solutions do not take into account that knowledge management practices in small and medium-sized companies are more congruous with apprenticeship-based learning than with the formal training typical of big companies. This software is based on the European project E! 9770 PrEmISES, and it helps small and medium-sized enterprises better exploit their information spaces. One central piece is the ontology component. Nowadays, ontologies play an important role: many computer science domains, including software engineering, online learning, education, and knowledge extraction, use ontologies to organize and share information in a semantic way. PrEmISES has the capability to couple with the existing data systems used by small and medium-sized companies and thereby enhance them with a semantic layer/engine. The engine is used to find organizational documents within companies and to make searches more accurate through the use of ontologies. PrEmISES is also software with educational purposes, based on the ubiquitous learning paradigm, that uses ontologies to find and organize relevant organizational documents based on employees' work profiles. In this way, employees can benefit from a fast educational process (i.e., online learning) tailored to their individual work profiles. The engine is semantically enriched, meaning that it searches for the specified words/query plus semantically related concepts. In this paper, we present the PrEmISES architecture: its main components and the main steps that were followed to develop the ontologies used for the ontology component.
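The semantically enriched search described here, matching a query plus semantically related concepts, can be sketched with a miniature ontology; the vocabulary below is a made-up stand-in for the PrEmISES ontology component, not its actual content.

```python
# Toy semantic query expansion: a search term is broadened with synonyms and
# narrower concepts from an ontology before matching documents.
ontology = {
    "contract": {"synonyms": ["agreement"], "narrower": ["NDA", "service contract"]},
    "invoice": {"synonyms": ["bill"], "narrower": ["credit note"]},
}

def expand_query(term):
    entry = ontology.get(term.lower(), {})
    expanded = {term, *entry.get("synonyms", []), *entry.get("narrower", [])}
    return {t.lower() for t in expanded}

def search(term, documents):
    terms = expand_query(term)
    return [d for d in documents if any(t in d.lower() for t in terms)]

docs = ["Signed NDA with supplier", "Q3 bill for cloud services"]
print(search("contract", docs))  # finds the NDA via the narrower concept
print(search("invoice", docs))   # finds the bill via the synonym
```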
7

Tonelli, Sara, Marco Rospocher, Emanuele Pianta, and Luciano Serafini. "Boosting Collaborative Ontology Building with Key-Concept Extraction." In 2011 IEEE Fifth International Conference on Semantic Computing (ICSC). IEEE, 2011. http://dx.doi.org/10.1109/icsc.2011.21.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Kang, SungKu, Lalit Patil, Arvind Rangarajan, Abha Moitra, Tao Jia, Dean Robinson, and Debasish Dutta. "Extraction of Manufacturing Rules From Unstructured Text Using a Semantic Framework." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-47556.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Formal ontology and rule-based approaches founded on semantic technologies have been proposed as powerful mechanisms to enable early manufacturability feedback. A fundamental unresolved problem in this context is that all manufacturing knowledge is encoded in unstructured text, and there are no reliable methods to automatically convert it into formal ontologies and rules. It is impractical for engineers to write accurate domain rules in structured semantic languages such as the Web Ontology Language (OWL) or the Semantic Application Design Language (SADL). Previous efforts in manufacturing research that have targeted the extraction of OWL ontologies from text have focused on basic concept names and hierarchies. This paper presents a semantics-based framework for acquiring more complex manufacturing knowledge, primarily rules, in a semantically usable form from unstructured English text such as that found in manufacturing handbooks. The approach starts with existing domain knowledge in the form of OWL ontologies and applies natural language processing techniques to extract dependencies between the words in the text that contains a rule. Domain-specific triples capturing each rule are then extracted from each dependency graph. Finally, new computer-interpretable rules are composed from the triples. The feasibility of the framework has been evaluated by automatically and accurately generating manufacturability rules from a manufacturing handbook. The paper also documents the cases that yield ambiguous results. Analysis of the results shows that the proposed framework can be extended to extract domain ontologies, which forms part of ongoing work that also focuses on automating the different steps and improving the reliability of the system.
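The pipeline shape described here, dependency parsing followed by triple extraction, can be sketched with spaCy; the heuristic below is far simpler than the paper's framework, and the example sentence is invented.

```python
# Rough sketch: parse a sentence, then emit (subject, verb, object) triples
# from the dependency tree as raw material for later rule composition.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    doc = nlp(text)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr", "pobj")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    return triples

# Depending on the parser model, this may print [('thickness', 'exceed', 'mm')].
print(extract_triples("The minimum wall thickness exceeds 2 mm."))
```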
9

Arnold, Patrick, and Erhard Rahm. "Extracting Semantic Concept Relations from Wikipedia." In the 4th International Conference. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2611040.2611079.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Wei, Xiao, and Xiangfeng Luo. "Concept Extraction based on Association Linked Network." In 2010 Sixth International Conference on Semantics Knowledge and Grid (SKG). IEEE, 2010. http://dx.doi.org/10.1109/skg.2010.11.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
