Journal articles on the topic "Knowledge graph refinement"

Follow this link to see other types of publications on the topic: Knowledge graph refinement.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Check out the 48 best journal articles on the topic "Knowledge graph refinement".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever such details are provided in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Zhang, Dehai, Menglong Cui, Yun Yang, Po Yang, Cheng Xie, Di Liu, Beibei Yu, and Zhibo Chen. "Knowledge Graph-Based Image Classification Refinement". IEEE Access 7 (2019): 57678–90. http://dx.doi.org/10.1109/access.2019.2912627.

2

Hsueh, Huei-Chia, Shuo-Chen Chien, Chih-Wei Huang, Hsuan-Chia Yang, Usman Iqbal, Li-Fong Lin, and Wen-Shan Jian. "A novel Multi-Level Refined (MLR) knowledge graph design and chatbot system for healthcare applications". PLOS ONE 19, no. 1 (January 31, 2024): e0296939. http://dx.doi.org/10.1371/journal.pone.0296939.

Abstract:
Imagine having a knowledge graph that can extract medical health knowledge related to patient diagnosis solutions and treatments from thousands of research papers, distilled using machine learning techniques in healthcare applications. Medical doctors can quickly determine treatments and medications for urgent patients, while researchers can discover innovative treatments for existing and unknown diseases. This would be incredible! Our approach serves as an all-in-one solution, enabling users to employ a unified design methodology for creating their own knowledge graphs. Our rigorous validation process involves multiple stages of refinement, ensuring that the resulting answers are of the utmost professionalism and solidity, surpassing the capabilities of other solutions. However, building a high-quality knowledge graph from scratch, with complete triplets consisting of subject entities, relations, and object entities, is a complex and important task that requires a systematic approach. To address this, we have developed a comprehensive design flow for knowledge graph development and a high-quality entities database. We also developed knowledge distillation schemes that allow you to input a keyword (entity) and display all related entities and relations. Our proprietary methodology, multiple levels refinement (MLR), is a novel approach to constructing knowledge graphs and refining entities level-by-level. This ensures the generation of high-quality triplets and a readable knowledge graph through keyword searching. We have generated multiple knowledge graphs and developed a scheme to find the corresponding inputs and outputs of entity linking. Entities with multiple inputs and outputs are referred to as joints, and we have created a joint-version knowledge graph based on this. Additionally, we developed an interactive knowledge graph, providing a user-friendly environment for medical professionals to explore entities related to existing or unknown treatments/diseases. Finally, we have advanced knowledge distillation techniques.
3

Kayali, Moe, and Dan Suciu. "Quasi-Stable Coloring for Graph Compression". Proceedings of the VLDB Endowment 16, no. 4 (December 2022): 803–15. http://dx.doi.org/10.14778/3574245.3574264.

Abstract:
We propose quasi-stable coloring, an approximate version of stable coloring. Stable coloring, also called color refinement, is a well-studied technique in graph theory for classifying vertices, which can be used to build compact, lossless representations of graphs. However, its usefulness is limited due to its reliance on strict symmetries. Real data compresses very poorly using color refinement. We propose the first, to our knowledge, approximate color refinement scheme, which we call quasi-stable coloring. By using approximation, we alleviate the need for strict symmetry, and allow for a tradeoff between the degree of compression and the accuracy of the representation. We study three applications: Linear Programming, Max-Flow, and Betweenness Centrality, and provide theoretical evidence in each case that a quasi-stable coloring can lead to good approximations on the reduced graph. Next, we consider how to compute a maximal quasi-stable coloring: we prove that, in general, this problem is NP-hard, and propose a simple, yet effective algorithm based on heuristics. Finally, we evaluate experimentally the quasi-stable coloring technique on several real graphs and applications, comparing with prior approximation techniques.
4

Paulheim, Heiko. "Knowledge graph refinement: A survey of approaches and evaluation methods". Semantic Web 8, no. 3 (December 6, 2016): 489–508. http://dx.doi.org/10.3233/sw-160218.

5

Zhang, Yichong, and Yongtao Hao. "Traditional Chinese Medicine Knowledge Graph Construction Based on Large Language Models". Electronics 13, no. 7 (April 7, 2024): 1395. http://dx.doi.org/10.3390/electronics13071395.

Abstract:
This study explores the use of large language models in constructing a knowledge graph for Traditional Chinese Medicine (TCM) to improve the representation, storage, and application of TCM knowledge. The knowledge graph, based on a graph structure, effectively organizes entities, attributes, and relationships within the TCM domain. By leveraging large language models, we collected and embedded substantial TCM-related data, generating precise representations transformed into a knowledge graph format. Experimental evaluations confirmed the accuracy and effectiveness of the constructed graph, extracting various entities and their relationships, providing a solid foundation for TCM learning, research, and application. The knowledge graph has significant potential in TCM, aiding in teaching, disease diagnosis, treatment decisions, and contributing to TCM modernization. In conclusion, this paper utilizes large language models to construct a knowledge graph for TCM, offering a vital foundation for knowledge representation and application in the field, with potential for future expansion and refinement.
6

Aldughayfiq, Bader, Farzeen Ashfaq, N. Z. Jhanjhi, and Mamoona Humayun. "Capturing Semantic Relationships in Electronic Health Records Using Knowledge Graphs: An Implementation Using MIMIC III Dataset and GraphDB". Healthcare 11, no. 12 (June 15, 2023): 1762. http://dx.doi.org/10.3390/healthcare11121762.

Abstract:
Electronic health records (EHRs) are an increasingly important source of information for healthcare professionals and researchers. However, EHRs are often fragmented, unstructured, and difficult to analyze due to the heterogeneity of the data sources and the sheer volume of information. Knowledge graphs have emerged as a powerful tool for capturing and representing complex relationships within large datasets. In this study, we explore the use of knowledge graphs to capture and represent complex relationships within EHRs. Specifically, we address the following research question: Can a knowledge graph created using the MIMIC III dataset and GraphDB effectively capture semantic relationships within EHRs and enable more efficient and accurate data analysis? We map the MIMIC III dataset to an ontology using text refinement and Protege; then, we create a knowledge graph using GraphDB and use SPARQL queries to retrieve and analyze information from the graph. Our results demonstrate that knowledge graphs can effectively capture semantic relationships within EHRs, enabling more efficient and accurate data analysis. We provide examples of how our implementation can be used to analyze patient outcomes and identify potential risk factors. Our results demonstrate that knowledge graphs are an effective tool for capturing semantic relationships within EHRs, enabling a more efficient and accurate data analysis. Our implementation provides valuable insights into patient outcomes and potential risk factors, contributing to the growing body of literature on the use of knowledge graphs in healthcare. In particular, our study highlights the potential of knowledge graphs to support decision-making and improve patient outcomes by enabling a more comprehensive and holistic analysis of EHR data. Overall, our research contributes to a better understanding of the value of knowledge graphs in healthcare and lays the foundation for further research in this area.
7

Dong, Qian, Shuzi Niu, Tao Yuan, and Yucheng Li. "Disentangled Graph Recurrent Network for Document Ranking". Data Science and Engineering 7, no. 1 (February 15, 2022): 30–43. http://dx.doi.org/10.1007/s41019-022-00179-3.

Abstract:
BERT-based ranking models are emerging for their superior natural language understanding ability. All word relations and representations in the concatenation of query and document are modeled in the self-attention matrix as latent knowledge. However, some latent knowledge has none or negative effect on the relevance prediction between query and document. We model the observable and unobservable confounding factors in a causal graph and perform do-query to predict the relevance label given an intervention over this graph. For the observed factors, we block the back door path by an adaptive masking method through the transformer layer and refine word representations over this disentangled word graph through the refinement layer. For the unobserved factors, we resolve the do-operation query from the front door path by decomposing word representations into query related and unrelated parts through the decomposition layer. Pairwise ranking loss is mainly used for the ad hoc document ranking task, triangle distance loss is introduced to both the transformer and refinement layers for more discriminative representations, and mutual information constraints are put on the decomposition layer. Experimental results on public benchmark datasets TREC Robust04 and WebTrack2009-12 show that DGRe outperforms state-of-the-art baselines by more than 2%, especially for short queries.
8

Fauceglia, Nicolas, Mustafa Canim, Alfio Gliozzo, Jennifer J. Liang, Nancy Xin Ru Wang, Douglas Burdick, Nandana Mihindukulasooriya, et al. "KAAPA: Knowledge Aware Answers from PDF Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 16029–31. http://dx.doi.org/10.1609/aaai.v35i18.18002.

Abstract:
We present KaaPa (Knowledge Aware Answers from Pdf Analysis), an integrated solution for machine reading comprehension over both text and tables extracted from PDFs. KaaPa enables interactive question refinement using facets generated from an automatically induced Knowledge Graph. In addition it provides a concise summary of the supporting evidence for the provided answers by aggregating information across multiple sources. KaaPa can be applied consistently to any collection of documents in English with zero domain adaptation effort. We showcase the use of KaaPa for QA on scientific literature using the COVID-19 Open Research Dataset.
9

Koutra, Danai. "The power of summarization in graph mining and learning". Proceedings of the VLDB Endowment 14, no. 13 (September 2021): 3416. http://dx.doi.org/10.14778/3484224.3484238.

Abstract:
Our ability to generate, collect, and archive data related to everyday activities, such as interacting on social media, browsing the web, and monitoring well-being, is rapidly increasing. Getting the most benefit from this large-scale data requires analysis of patterns it contains, which is computationally intensive or even intractable. Summarization techniques produce compact data representations (summaries) that enable faster processing by complex algorithms and queries. This talk will cover summarization of interconnected data (graphs) [3], which can represent a variety of natural processes (e.g., friendships, communication). I will present an overview of my group's work on bridging the gap between research on summarized network representations and real-world problems. Examples include summarization of massive knowledge graphs for refinement [2] and on-device querying [4], summarization of graph streams for persistent activity detection [1], and summarization within graph neural networks for fast, interpretable classification [5]. I will conclude with open challenges and opportunities for future research.
10

Huang, Yu-Xuan, Wang-Zhou Dai, Yuan Jiang, and Zhi-Hua Zhou. "Enabling Knowledge Refinement upon New Concepts in Abductive Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7928–35. http://dx.doi.org/10.1609/aaai.v37i7.25959.

Abstract:
Recently there are great efforts on leveraging machine learning and logical reasoning. Many approaches start from a given knowledge base, and then try to utilize the knowledge to help machine learning. In real practice, however, the given knowledge base can often be incomplete or even noisy, and thus, it is crucial to develop the ability of knowledge refinement or enhancement. This paper proposes to enable the Abductive learning (ABL) paradigm to have the ability of knowledge refinement/enhancement. In particular, we focus on the problem that, in contrast to closed-environment tasks where a fixed set of symbols are enough to represent the concepts in the domain, in open-environment tasks new concepts may emerge. Ignoring those new concepts can lead to significant performance decay, whereas it is challenging to identify new concepts and add them to the existing knowledge base with potential conflicts resolved. We propose the ABL_nc approach which exploits machine learning in ABL to identify new concepts from data, exploits knowledge graph to match them with entities, and refines existing knowledge base to resolve conflicts. The refined/enhanced knowledge base can then be used in the next loop of ABL and help improve the performance of machine learning. Experiments on three neuro-symbolic learning tasks verified the effectiveness of the proposed approach.
11

Kim Cheng, A. M., and Hsiu-yen Tsai. "A graph-based approach for timing analysis and refinement of OPS5 knowledge-based systems". IEEE Transactions on Knowledge and Data Engineering 16, no. 2 (February 2004): 271–88. http://dx.doi.org/10.1109/tkde.2004.1269603.

12

Zhuang, Weibin, Taihua Zhang, Liguo Yao, Yao Lu, and Panliang Yuan. "A Research on Image Semantic Refinement Recognition of Product Surface Defects Based on Causal Knowledge". Applied Sciences 12, no. 17 (September 2, 2022): 8828. http://dx.doi.org/10.3390/app12178828.

Abstract:
The images of surface defects of industrial products contain not only the defect type but also the causal logic related to defective design and manufacturing. This information is recessive and unstructured and difficult to find and use, which cannot provide an a priori basis for solving the problem of product defects in design and manufacturing. Therefore, in this paper, we propose an image semantic refinement recognition method based on causal knowledge for product surface defects. Firstly, an improved ResNet was designed to improve the image classification effect. Then, the causal knowledge graph of surface defects was constructed and stored in Neo4j. Finally, a visualization platform for causal knowledge analysis was developed to realize the causal visualization of the defects in the causal knowledge graph driven by the output data of the network model. In addition, the method is validated by the surface defects dataset. The experimental results show that the average accuracy, recall, and precision of the improved ResNet are improved by 11%, 8.15%, and 8.3%, respectively. Through the application of the visualization platform, the cause results obtained are correct by related analysis and comparison, which can effectively represent the cause of aluminum profile surface defects, verifying the effectiveness of the method proposed in this paper.
13

Osipova, Irina, and Veselina Gospodinova. "Representation of the process of sudden outbursts of coal and gas using a knowledge graph". E3S Web of Conferences 192 (2020): 04022. http://dx.doi.org/10.1051/e3sconf/202019204022.

Abstract:
In the process of developing a coal deposit, significant amounts of data and extensive knowledge about the object of operation are accumulated. This data and knowledge may not be structured, not reliable, contradictory. In turn, for the further conduct of the entire complex of mining operations and the development of a coal mining enterprise, structured and reliable data and knowledge are needed. The aim of the study is to propose structuring knowledge about the process of sudden outbursts of coal and gas by presenting knowledge about the subject of research in the form of an elementary knowledge graph. The study is based on an ontological approach to solving the issue of safe mining, namely, to study the problem of sudden emissions of coal and gas from the standpoint of the fact that the release is considered before it has occurred. To solve the existing problem, it is proposed to create an elementary knowledge graph that takes into account geological, hydrogeological, geophysical, mining information about the subsoil use object, as well as geomechanical and geodynamic processes, and physicochemical mass transfer processes occurring in the coal seam and the accumulated experience and knowledge of miners using methods Data mining. As a result of the study, we can conclude that it is necessary to create a network of elementary knowledge graphs, and use other methods of knowledge extraction. For further analysis and refinement of data and knowledge about the process of sudden coal and gas emissions.
14

Abedini, Farhad, Mohammad Reza Keyvanpour, and Mohammad Bagher Menhaj. "Correction Tower: A General Embedding Method of the Error Recognition for the Knowledge Graph Correction". International Journal of Pattern Recognition and Artificial Intelligence 34, no. 10 (January 8, 2020): 2059034. http://dx.doi.org/10.1142/s021800142059034x.

Abstract:
Today, knowledge graphs (KGs) are growing by enrichment and refinement methods. The enrichment and refinement can be gained using the correction and completion of the KG. The studies of the KG completion are rich, but less attention has been paid to the methods of the KG error correction. The correction methods are divided into embedding and nonembedding methods. Embedding correction methods have been recently introduced in which a KG is embedded into a vector space. Also, existing correction approaches focused on the recognition of the three types of errors, the outliers, inconsistencies and erroneous relations. One of the challenges is that most outlier correction methods can recognize only numeric outlier entities by nonembedding methods. On the other hand, inconsistency errors are recognized during the knowledge extraction step and existing methods of this field do not pay attention to the recognition of these errors as post-correction by embedding methods. Also, to correct erroneous relations, new embedding techniques have not been used. Since the errors of a KG are variant and there is no method to cover all of them, a new general correction method is proposed in this paper. This method is called correction tower in which these three error types are corrected in three trays. In this correction tower, a new configuration will be suggested to solve the above challenges. For this aim, a new embedding method is proposed for each tray. Finally, the evaluation results show that the proposed correction tower can improve the KG error correction methods and proposed configuration can outperform previous results.
15

Mohammadi, Bahram, Yicong Hong, Yuankai Qi, Qi Wu, Shirui Pan, and Javen Qinfeng Shi. "Augmented Commonsense Knowledge for Remote Object Grounding". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4269–77. http://dx.doi.org/10.1609/aaai.v38i5.28223.

Abstract:
The vision-and-language navigation (VLN) task necessitates an agent to perceive the surroundings, follow natural language instructions, and act in photo-realistic unseen environments. Most of the existing methods employ the entire image or object features to represent navigable viewpoints. However, these representations are insufficient for proper action prediction, especially for the REVERIE task, which uses concise high-level instructions, such as “Bring me the blue cushion in the master bedroom”. To address enhancing representation, we propose an augmented commonsense knowledge model (ACK) to leverage commonsense information as a spatio-temporal knowledge graph for improving agent navigation. Specifically, the proposed approach involves constructing a knowledge base by retrieving commonsense information from ConceptNet, followed by a refinement module to remove noisy and irrelevant knowledge. We further present ACK which consists of knowledge graph-aware cross-modal and concept aggregation modules to enhance visual representation and visual-textual data alignment by integrating visible objects, commonsense knowledge, and concept history, which includes object and knowledge temporal information. Moreover, we add a new pipeline for the commonsense-based decision-making process which leads to more accurate local action prediction. Experimental results demonstrate our proposed model noticeably outperforms the baseline and achieves the state-of-the-art on the REVERIE benchmark. The source code is available at https://github.com/Bahram-Mohammadi/ACK.
16

Gupta, Sudhanshu, and Krishna Kumar Tiwari. "Exploratory Search Prompt Generation using n-Degree Connection in Knowledge Graph". Asian Journal of Research in Computer Science 16, no. 4 (December 9, 2023): 318–26. http://dx.doi.org/10.9734/ajrcos/2023/v16i4393.

Abstract:
Search engines play a vital role in retrieving information, but users often struggle to express their precise information needs, resulting in less-than-optimal search results. Therefore, enhancing search query refinement is crucial to elevate the accuracy and relevance of search outcomes. One particular challenge that existing search engines face is presenting refined results for queries containing two or more unrelated entities. This paper presents a novel approach for efficient search prompt generation by leveraging connected nodes and attributes in the knowledge graph. We propose a comprehensive exploration technique that explores the n-degrees connections and their attributes to generate all possible imaginative prompts. We realized that n-degrees connection exploration is an expensive task, hence we conducted experiments to determine the effectiveness of 2-degree exploration prompts in covering all the user-asked queries within the provided dataset. In testing with approximately 2000 queries on a related knowledge graph dataset, we found out that our proposed methodology significantly improved query coverage. At n=1 depth, the coverage increased from 58% without the methodology to 84% with it. At n=2 depth, the coverage rose from 92% without the methodology to nearly 99% with it. Additionally, due to our question caching strategy at n=2, we observed faster response times for all questions.
17

Zhang, Litian, Xiaoming Zhang, Ziyi Zhou, Feiran Huang, and Chaozhuo Li. "Reinforced Adaptive Knowledge Learning for Multimodal Fake News Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16777–85. http://dx.doi.org/10.1609/aaai.v38i15.29618.

Abstract:
Nowadays, detecting multimodal fake news has emerged as a foremost concern since the widespread dissemination of fake news may incur adverse societal impact. Conventional methods generally focus on capturing the linguistic and visual semantics within the multimodal content, which fall short in effectively distinguishing the heightened level of meticulous fabrications. Recently, external knowledge is introduced to provide valuable background facts as complementary to facilitate news detection. Nevertheless, existing knowledge-enhanced endeavors directly incorporate all knowledge contexts through static entity embeddings, resulting in the potential noisy and content-irrelevant knowledge. Moreover, the integration of knowledge entities makes it intractable to model the sophisticated correlations between multimodal semantics and knowledge entities. In light of these limitations, we propose a novel Adaptive Knowledge-Aware Fake News Detection model, dubbed AKA-Fake. For each news, AKA-Fake learns a compact knowledge subgraph under a reinforcement learning paradigm, which consists of a subset of entities and contextual neighbors in the knowledge graph, restoring the most informative knowledge facts. A novel heterogeneous graph learning module is further proposed to capture the reliable cross-modality correlations via topology refinement and modality-attentive pooling. Our proposal is extensively evaluated over three popular datasets, and experimental results demonstrate the superiority of AKA-Fake.
18

Li, Chunhua, Pengpeng Zhao, Victor S. Sheng, Xuefeng Xian, Jian Wu, and Zhiming Cui. "Refining Automatically Extracted Knowledge Bases Using Crowdsourcing". Computational Intelligence and Neuroscience 2017 (2017): 1–17. http://dx.doi.org/10.1155/2017/4092135.

Abstract:
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
19

Xu, Lin, Qixian Zhou, Ke Gong, Xiaodan Liang, Jianheng Tang, and Liang Lin. "End-to-End Knowledge-Routed Relational Dialogue System for Automatic Diagnosis". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7346–53. http://dx.doi.org/10.1609/aaai.v33i01.33017346.

Abstract:
Beyond current conversational chatbots or task-oriented dialogue systems that have attracted increasing attention, we move forward to develop a dialogue system for automatic medical diagnosis that converses with patients to collect additional symptoms beyond their self-reports and automatically makes a diagnosis. Besides the challenges for conversational dialogue systems (e.g. topic transition coherency and question understanding), automatic medical diagnosis further poses more critical requirements for the dialogue rationality in the context of medical knowledge and symptom-disease relations. Existing dialogue systems (Madotto, Wu, and Fung 2018; Wei et al. 2018; Li et al. 2017) mostly rely on data-driven learning and are not able to encode extra expert knowledge graphs. In this work, we propose an End-to-End Knowledge-routed Relational Dialogue System (KR-DS) that seamlessly incorporates rich medical knowledge graph into the topic transition in dialogue management, and makes it cooperative with natural language understanding and natural language generation. A novel Knowledge-routed Deep Q-network (KR-DQN) is introduced to manage topic transitions, which integrates a relational refinement branch for encoding relations among different symptoms and symptom-disease pairs, and a knowledge-routed graph branch for topic decision-making. Extensive experiments on a public medical dialogue dataset show our KR-DS significantly beats state-of-the-art methods (by more than 8% in diagnosis accuracy). We further show the superiority of our KR-DS on a newly collected medical dialogue system dataset, which is more challenging retaining original self-reports and conversational data between patients and doctors.
20

Huang, Wen Tao, Pei Lu Niu, Yin Feng Liu, and Wei Jie Wang. "Spur Bevel Gearbox Fault Diagnosis Based on Wavelet Packet Transform for Feature Extraction and Flow Graph Data Mining". Advanced Materials Research 753-755 (August 2013): 2297–302. http://dx.doi.org/10.4028/www.scientific.net/amr.753-755.2297.

Abstract:
The gearbox vibration signal contains a wealth of gear status information; the refined local analysis ability of the wavelet packet transform (WPT) is used to extract the fault sign attribute information from the vibration signal. The extracted sign attribute information serves as the input of the flow graph (FG), and decision rules are generated to achieve the purpose of fault diagnosis. FG is a knowledge representation and data mining method used to mine the intrinsic links between the data and improve the clarity of the potential knowledge. The results confirmed that the use of WPT feature extraction and the FG data mining method can accurately detect gear faults.
21

Fan, Yan, Yu Wang, Pengfei Zhu, and Qinghua Hu. "Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 11927–35. http://dx.doi.org/10.1609/aaai.v38i11.29079.

Abstract:
Continual learning (CL) has shown promising results and comparable performance to learning at once in a fully supervised manner. However, CL strategies typically require a large number of labeled samples, making their real-life deployment challenging. In this work, we focus on semi-supervised continual learning (SSCL), where the model progressively learns from partially labeled data with unknown categories. We provide a comprehensive analysis of SSCL and demonstrate that unreliable distributions of unlabeled data lead to unstable training and refinement of the progressing stages. This problem severely impacts the performance of SSCL. To address the limitations, we propose a novel approach called Dynamic Sub-Graph Distillation (DSGD) for semi-supervised continual learning, which leverages both semantic and structural information to achieve more stable knowledge distillation on unlabeled data and exhibit robustness against distribution bias. Firstly, we formalize a general model of structural distillation and design a dynamic graph construction for the continual learning progress. Next, we define a structure distillation vector and design a dynamic sub-graph distillation algorithm, which enables end-to-end training and adaptability to scale up tasks. The entire proposed method is adaptable to various CL methods and supervision settings. Finally, experiments conducted on three datasets CIFAR10, CIFAR100, and ImageNet-100, with varying supervision ratios, demonstrate the effectiveness of our proposed approach in mitigating the catastrophic forgetting problem in semi-supervised continual learning scenarios. Our code is available: https://github.com/fanyan0411/DSGD.
22

Qu, Fang, Youqiang Sun, Man Zhou, Liu Liu, Huamin Yang, Junqing Zhang, He Huang, and Danfeng Hong. "Vegetation Land Segmentation with Multi-Modal and Multi-Temporal Remote Sensing Images: A Temporal Learning Approach and a New Dataset". Remote Sensing 16, no. 1 (December 19, 2023): 3. http://dx.doi.org/10.3390/rs16010003.

Abstract:
In recent years, remote sensing analysis has gained significant attention in visual analysis applications, particularly in segmenting and recognizing remote sensing images. However, the existing research has predominantly focused on single-period RGB image analysis, thus overlooking the complexities of remote sensing image capture, especially in highly vegetated land parcels. In this paper, we provide a large-scale vegetation remote sensing (VRS) dataset and introduce the VRS-Seg task for multi-modal and multi-temporal vegetation segmentation. The VRS dataset incorporates diverse modalities and temporal variations, and its annotations are organized using the Vegetation Knowledge Graph (VKG), thereby providing detailed object attribute information. To address the VRS-Seg task, we introduce VRSFormer, a critical pipeline that integrates multi-temporal and multi-modal data fusion, geometric contour refinement, and category-level classification inference. The experimental results demonstrate the effectiveness and generalization capability of our approach. The availability of VRS and the VRS-Seg task paves the way for further research in multi-modal and multi-temporal vegetation segmentation in remote sensing imagery.
23

D’yachkova, Olga N., and Alexander E. Mikhailov. "Management of urban public green spaces". Stroitel'stvo: nauka i obrazovanie [Construction: Science and Education] 13, no. 1 (March 30, 2023): 152–73. http://dx.doi.org/10.22227/2305-5502.2023.1.11.

Abstract:
Introduction. People want effective management and balanced development of urbanised systems. In a comprehensive social, economic and environmental research of human living conditions in the city, various kinds of sociological surveys of the population are applied and foresight sessions are held with subject matter experts to analyse the existing level of safety and comfort of residence. However, in the context of growing urbanized systems, there is an acute shortage of new methods, ways and tools of knowing them for the purpose of effective management and balanced development. Materials and methods. The article presents aspects of the methodology for extracting and structuring knowledge of urban public green spaces in cities. The work is based on the paradigms of ontological engineering and knowledge management. Results. Ontological engineering as a theory and methodology for developing ontologies is actively developing. However, the main success lies in the field of knowledge formalization technology, while the methodology for extracting and structuring knowledge is still under development. The problem of meaningful analysis of the subject area remains open, the relevance of research of which is confirmed by sustainable development goal 11, target 11.7: “by 2030 provide universal access to safe, available and inclusive green spaces and public spaces, especially for women and children, older and disabled people”. The article describes the process of developing a taxonomy of expert knowledge about urban public green spaces in city. The taxonomy includes classes, subclasses, properties for subclasses and options for properties. Conclusions. The results of the conceptualisation of knowledge of the subject can be used as elements in the construction of the knowledge graph framework. With appropriate refinement, the taxonomy can be in demand for scientific research, design of innovative services and intelligent systems used in urban planning and urban economy.
24

Perera, S. N., N. Hetti Arachchige, and D. Schneider. "Integration of Image Data for Refining Building Boundaries Derived from Point Clouds". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3 (August 11, 2014): 253–58. http://dx.doi.org/10.5194/isprsarchives-xl-3-253-2014.

Abstract:
Geometrically and topologically correct 3D building models are required to satisfy with new demands such as 3D cadastre, map updating, and decision making. More attention on building reconstruction has been paid using Airborne Laser Scanning (ALS) point cloud data. The planimetric accuracy of roof outlines, including step-edges is questionable in building models derived from only point clouds. This paper presents a new approach for the detection of accurate building boundaries by merging point clouds acquired by ALS and aerial photographs. It comprises two major parts: reconstruction of initial roof models from point clouds only, and refinement of their boundaries. A shortest closed circle (graph) analysis method is employed to generate building models in the first step. Having the advantages of high reliability, this method provides reconstruction without prior knowledge of primitive building types even when complex height jumps and various types of building roof are available. The accurate position of boundaries of the initial models is determined by the integration of the edges extracted from aerial photographs. In this process, scene constraints defined based on the initial roof models are introduced as the initial roof models are representing explicit unambiguous geometries about the scene. Experiments were conducted using the ISPRS benchmark test data. Based on test results, we show that the proposed approach can reconstruct 3D building models with higher geometrical (planimetry and vertical) and topological accuracy.
25

Wallner, Johannes P., Andreas Niskanen, and Matti Järvisalo. "Complexity Results and Algorithms for Extension Enforcement in Abstract Argumentation". Journal of Artificial Intelligence Research 60 (September 13, 2017): 1–40. http://dx.doi.org/10.1613/jair.5415.

Abstract:
Argumentation is an active area of modern artificial intelligence (AI) research, with connections to a range of fields, from computational complexity theory and knowledge representation and reasoning to philosophy and social sciences, as well as application-oriented work in domains such as legal reasoning, multi-agent systems, and decision support. Argumentation frameworks (AFs) of abstract argumentation have become the graph-based formal model of choice for many approaches to argumentation in AI, with semantics defining sets of jointly acceptable arguments, i.e., extensions. Understanding the dynamics of AFs has been recently recognized as an important topic in the study of argumentation in AI. In this work, we focus on the so-called extension enforcement problem in abstract argumentation as a recently proposed form of argumentation dynamics. We provide a nearly complete computational complexity map of argument-fixed extension enforcement under various major AF semantics, with results ranging from polynomial-time algorithms to completeness for the second level of the polynomial hierarchy. Complementing the complexity results, we propose algorithms for NP-hard extension enforcement based on constraint optimization under the maximum satisfiability (MaxSAT) paradigm. Going beyond NP, we propose novel MaxSAT-based counterexample-guided abstraction refinement procedures for the second-level complete problems and present empirical results on a prototype system constituting the first approach to extension enforcement in its generality.
26

Saxena, Mohit Chandra, Munish Sabharwal, and Preeti Bajaj. "Exploring Path Computation Techniques in Software-Defined Networking: A Review and Performance Evaluation of Centralized, Distributed, and Hybrid Approaches". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9s (August 31, 2023): 553–67. http://dx.doi.org/10.17762/ijritcc.v11i9s.7468.

Abstract:
Software-Defined Networking (SDN) is a networking paradigm that allows network administrators to dynamically manage network traffic flows and optimize network performance. One of the key benefits of SDN is the ability to compute and direct traffic along efficient paths through the network. In recent years, researchers have proposed various SDN-based path computation techniques to improve network performance and reduce congestion. This review paper provides a comprehensive overview of SDN-based path computation techniques, including both centralized and distributed approaches. We discuss the advantages and limitations of each approach and provide a critical analysis of the existing literature. In particular, we focus on recent advances in SDN-based path computation techniques, including Dynamic Shortest Path (DSP), Distributed Flow-Aware Path Computation (DFAPC), and Hybrid Path Computation (HPC). We evaluate three SDN-based path computation algorithms: centralized, distributed, and hybrid, focusing on optimal path determination for network nodes. Test scenarios with random graph simulations are used to compare their performance. The centralized algorithm employs global network knowledge, the distributed algorithm relies on local information, and the hybrid approach combines both. Experimental results demonstrate the hybrid algorithm's superiority in minimizing path costs, striking a balance between optimization and efficiency. The centralized algorithm ranks second, while the distributed algorithm incurs higher costs due to limited local knowledge. This research offers insights into efficient path computation and informs future SDN advancements. We also discuss the challenges associated with implementing SDN-based path computation techniques, including scalability, security, and interoperability. Furthermore, we highlight the potential applications of SDN-based path computation techniques in various domains, including data center networks, wireless networks, and the Internet of Things (IoT). Finally, we conclude that SDN-based path computation techniques have the potential to significantly improve network performance and reduce congestion. However, further research is needed to evaluate the effectiveness of these techniques under different network conditions and traffic patterns. With the rapid growth of SDN technology, we expect to see continued development and refinement of SDN-based path computation techniques in the future.
27

Harrahill, Kieran, Áine Macken-Walsh, Eoin O’Neill, and Mick Lennon. "An Analysis of Irish Dairy Farmers’ Participation in the Bioeconomy: Exploring Power and Knowledge Dynamics in a Multi-actor EIP-AGRI Operational Group". Sustainability 14, no. 19 (September 24, 2022): 12098. http://dx.doi.org/10.3390/su141912098.

Abstract:
The European Commission’s European Innovation Partnership for Agricultural Productivity and Sustainability (EIP-AGRI), part of the European Commission’s Europe 2020 strategy, aims to ‘achieve more and better from less’ by bringing together a diversity of innovation actors to harness their combined knowledges to creatively achieve sustainability goals. The creation and novel use of biomaterials remains both a significant challenge and opportunity and bringing together all the relevant actors from primary production through to refinement and processing is anticipated to make progress in bringing into practice pilot operational approaches on the ground. For the bioeconomy, a nascent sector, it is a significant challenge for it to become established; grow; innovate and engage all the relevant actors. It has been noted internationally that primary producers, among other cohorts, remain marginalised from bioeconomy activities, which significantly compromises how inclusive and innovative the bioeconomy is likely to be henceforth. In this context, an interesting case study is the Biorefinery Glas Operational Group (OG), located in Ireland. The OG was a ‘small-scale-farmer-led green biorefinery supporting farmer diversification into the circular bioeconomy’. The central research question of this paper concerns the dynamics of farmers’ participation in the OG, focusing specifically on how their knowledges shaped the operation of the OG and bioeconomy activities within it. This paper presents a social network graph illustrating the diverse actors involved in the OG, their relative degrees of connectedness to each other, and an overview of the differing levels of actors’ influence in the network. Interrogating the roles of different actors further, a lens of power theory is used to explore how farmers’ knowledges were used in combination with others’ knowledges to shape the development of the OG and innovation within it. The overall conclusion from an analysis of interviews conducted with farmer and non-farmer participants in the OG is that while farmers were highly connected with other members of the OG and viewed their involvement in the OG positively, the level of influence they had in decision-making processes in some areas of the OG was relatively limited. Different types of members of the OG tended to work in a relatively segmented way, with farmers contributing as input suppliers and on the practical side at the farm level, while other members of the OG such as scientists worked on more technical aspects. This paper concludes by providing conclusions and lessons of relevance to innovation-brokers and practitioners, and for the operation of OGs involving farmers elsewhere.
28

Barrière, Caroline. "Hierarchical refinement and representation of the causal relation". Terminology 8, no. 1 (June 21, 2002): 91–111. http://dx.doi.org/10.1075/term.8.1.05bar.

Abstract:
This research looks at the complexity inherent in the causal relation and the implications for its representation in a Terminological Knowledge Base (TKB). Supported by a more general study of semantic relation hierarchies, a hierarchical refinement of the causal relation is proposed. It results from a manual search of a corpus which shows that it efficiently captures and formalizes variations expressed in text. The feasibility of determining such categorization during automatic extraction from corpora is also explored. Conceptual graphs are used as a representation formalism to which we have added certainty information to capture the degree of certainty surrounding the interaction between two terms involved in a causal relation.
29

Osipova, Irina. "Dynamic model construction method for coal quality control in complex-structural deposits". Sustainable Development of Mountain Territories 14, no. 4 (December 30, 2022): 586–93. http://dx.doi.org/10.21177/1998-4502-2022-14-4-586-593.

Abstract:
Introduction. Sustainable development of the coal mine is inextricably linked to quality control of the coal produced. Three main factors influence coal quality: geological, technological and industrial. The process of these factors influencing is varied: mean values of field quality indicators are generally known from exploration data, and during coal mining quality values may change. Keeping the required production balance in terms of the quality of coal produced is an immediate task of the coal-mining enterprise in terms of the strategy of complex development of solid mineral deposits. It should be noted that in scientific works concerning the quality of mined mineral, the question of the reliability and consistency of incoming geological exploration and mining-technological information and its interpretation was not considered sufficiently thoroughly. Materials and methods of research. In order to solve the existing problem of changing the quality of coal extracted, a multi-level model structure is being created, including coal quality map models, models for presenting knowledge of the coal mining process, a first approximation model for monitoring the quality of coal mined and a probability-graphical model for clarifying existing new knowledge on the uncertainty and unreliability of data presented in the form of the Bayesian network. Research results. On the basis of the created graph representing the change in the quality of coal extracted, the production problem was solved in evaluating the definition and understanding of which indicators have influenced the change in the quality of coal extracted in the form of the Bayesian network and the Algebraic Bayesian network. They are an acyclic graph with the top «Influence of major mining and technological indicators on the quality of coal extracted», which in turn determines the next level of peaks «Numerical estimation (obtained from the previous stage of model construction), allowing to determine the allowable level of extracted coal» and «Quality of mining operations». A set of these representations is fundamental for the complex indicator of quality of extracted coal. In addition to these indicators, vertices are introduced about the existence of certain deviations. As a result, the post-test probability of estimating the influence of the change in the allowable level of coal and the quality of mining operations on the complex indicator of extracted coal is P (Hi | E) = 0.025. Discussion. We can conclude that the posterior probability P (Hi | E) = 0.025 is less than a priori P (H1) = 0.057 because it includes two basic indicators of express testing and dynamic geometry. This makes it possible to speak about the influence of newly obtained information on the complex indicator of the extracted coal, which appears as the front of exploration and exploitation works to produce real-time mining geometric models of the deposit. These, in turn, serve as the basis for operational planning of coal production in terms of quantity and quality. Conclusion. The study found that the quality of mining operations has a significant impact on the integrated quality of coal produced, which in turn forms the main indicator of dynamic geometry. The study is complicated by the fact that there is a need to establish the regularities of the spatial placement of components and their variability at the spent and developed sites of the deposit. Resume.
The article presents the results of research in the form of a method of construction of a dynamic model of quality management of mined coal on complex-structural deposits with consideration of existing data and knowledge uncertainties. The application of the Bayesian network has been found to be useful for a more detailed study and refinement of existing new knowledge on the process of quality management of extracted coal under conditions of uncertainty and unreliability. With the help of the Bayesian network, it has been determined that the complex quality of coal produced is influenced by the quality of mining operations and numerical estimates of the acceptable quality level of coal extracted. The study revealed that the quality of mining operations has a significant impact on the complex indicator of the quality of extracted coal, which in turn forms the main indicator of dynamic geometry. The study is complicated by the fact that there is a need to establish the regularities of the spatial placement of components and their variability at the spent and developed sites of the deposit. The results of the research may be useful in creating the next level of dynamic model, namely the development of the Bayesian network for a real-time operational quality management process for coal production, which is one of the components of sustainable development of the coal mining enterprise. Proposals for practical application and direction for future research. Further research and application of the results of the work should be continued towards the creation of the next level of dynamic model, namely the development of the Bayesian network for the process of operational quality management of coal from complex-structural deposits in real time. Funding: The work was carried out within the framework of the State Order No. 075-00412-22 PR. Theme 1 (2022-2024). Methodological foundations of the strategy for the integrated development of reserves of solid mineral deposits in the dynamics of the development of mining systems (FUWE-2022-0005), reg. №1021062010531-8-1.5.1.
30

Cai, Yuandao, and Charles Zhang. "A Cocktail Approach to Practical Call Graph Construction". Proceedings of the ACM on Programming Languages 7, OOPSLA2 (October 16, 2023): 1001–33. http://dx.doi.org/10.1145/3622833.

Abstract:
After decades of research, constructing call graphs for modern C-based software remains either imprecise or inefficient when scaling up to the ever-growing complexity. The main culprit is the difficulty of resolving function pointers, as precise pointer analyses are cubic in nature and become exponential when considering calling contexts. This paper takes a practical stance by first conducting a comprehensive empirical study of function pointer manipulations in the wild. By investigating 5355 indirect calls in five popular open-source systems, we conclude that, instead of the past uniform treatments for function pointers, a cocktail approach can be more effective in “squeezing” the number of difficult pointers to a minimum using a potpourri of cheap methods. In particular, we decompose the costs of constructing highly precise call graphs of big code by tailoring several increasingly precise algorithms and synergizing them into a concerted workflow. As a result, many indirect calls can be precisely resolved in an efficient and principled fashion, thereby reducing the final, expensive refinements. This is, in spirit, similar to the well-known cocktail medical therapy. The results are encouraging — our implemented prototype called Coral can achieve similar precision versus the previous field-, flow-, and context-sensitive Andersen-style call graph construction, yet scale up to millions of lines of code for the first time, to the best of our knowledge. Moreover, by evaluating the produced call graphs through the lens of downstream clients (i.e., use-after-free detection, thin slicing, and directed grey-box fuzzing), the results show that Coral can dramatically improve their effectiveness for better vulnerability hunting, understanding, and reproduction. More excitingly, we found twelve confirmed bugs (six impacted by indirect calls) in popular systems (e.g., MariaDB), spreading across multiple historical versions.
31

Opiła, Janusz. "Role of Visualization in a Knowledge Transfer Process". Business Systems Research Journal 10, no. 1 (April 1, 2019): 164–79. http://dx.doi.org/10.2478/bsrj-2019-0012.

Pełny tekst źródła
Streszczenie:
Abstract Background: Efficient management of the knowledge requires implementation of new tools and refinement of the old ones - one of them is visualization. As visualization turns out to be an efficient tool for transfer of acquired knowledge, understanding of the influence of visualization techniques on the process of knowledge sharing is a necessity. Objectives: The main objective of the paper is to deepen the understanding of the relation of visualization to other knowledge sharing paths. The supplementary goal is a discussion of constraints on visualization styles in relation to readability and efficiency. Methods/Approach: Due to the ambiguous nature of the problem, case analysis was selected as a research method. Two research papers have been selected for that. The first one focused on agrotourism, introduces a general use theoretical tool suitable for various purposes, such as consumer sentiment analysis. The second one evaluates possibilities of revealing an implicit organizational structure of an organization by means of visual analysis using interaction graphs. Results: Visualization is an important part of data analysis and knowledge transfer process. Hybrid visualization styles enhance information density but may decrease clarity. Conclusions: In order to maximise the role of visualization in a knowledg tranfer process, the special care must be devoted to clarity, the optimal level of details and information density in order to avoid obfuscation.
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Ding, Yepeng, and Hiroyuki Sato. "Formalism-Driven Development: Concepts, Taxonomy, and Practice". Applied Sciences 12, no. 7 (27.03.2022): 3415. http://dx.doi.org/10.3390/app12073415.

Full source text
Abstract:
Formal methods are crucial in program specification and verification. Instead of building cases to test functionalities, formal methods specify functionalities as properties and mathematically prove them. Nevertheless, the applicability of formal methods is limited in most development processes due to the mathematical knowledge they require of developers. To promote the application of formal methods, we formulate formalism-driven development (FDD), an iterative and incremental development process that guides developers to adopt proper formal methods throughout the whole development lifespan. In FDD, system graphs, a variant of transition systems optimized for usability, are designed to model system structures and behaviors with representative properties. System graphs are built iteratively and incrementally via refinement. Properties of system graphs are specified in propositional and temporal logics and verified by model-checking techniques with interpretation over transition systems. In addition, skeleton programs are generated based on system graphs and expose implementable interfaces for executing external algorithms and emitting observable effects. Furthermore, we present Seniz, a framework that practicalizes and automates FDD. In this paper, we explicate the concepts and taxonomy of FDD and discuss its practice.
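As a rough illustration of the kind of verification FDD relies on, here is a minimal Python sketch that checks the simplest temporal property (an invariant, AG p) over a toy transition system by exhaustive state exploration. The state space, labels, and API are invented for the example and do not reflect Seniz.

```python
# Minimal invariant check over a toy transition system, the kind of
# structure FDD's "system graphs" refine; everything here is illustrative.

from collections import deque

# States, transitions, and a labeling of states with atomic propositions.
transitions = {"init": ["running"], "running": ["running", "done"], "done": []}
labels = {"init": {"safe"}, "running": {"safe"}, "done": {"safe", "terminated"}}

def check_invariant(start, prop):
    """Explore all reachable states and check that the proposition
    holds everywhere (an AG-prop check, the simplest temporal property)."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if prop not in labels[state]:
            return False, state  # counterexample state
        for nxt in transitions[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None

print(check_invariant("init", "safe"))  # (True, None)
```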
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Chebanyuk, O. V., O. V. Palahin, and K. K. Markov. "Domain engineering approach of software requirements analysis". PROBLEMS IN PROGRAMMING, no. 2-3 (September 2020): 164–72. http://dx.doi.org/10.15407/pp2020.02-03.164.

Full source text
Abstract:
Requirement analysis is one of the most important processes in software development lifecycle management. In the Agile approach, requirements models are the basis for generating other software development artifacts, so improving requirements-analysis approaches and techniques helps avoid mistakes in those artifacts. Domain engineering fundamentals underpin "template-oriented" approaches to designing software development artifacts. Reusing domain models and knowledge makes it possible to add details in vertical "model to model" transformation operations, refine generated software development artifacts, organize systematic software reuse, and perform many other activities. The paper proposes an approach to requirement analysis based on transforming UML Use Case diagrams into communication diagrams and then refining them with information from domain models. The advantages of the proposed approach are the following: the transformation method involves a "many to many" transformation in order to preserve the semantics of the initial model, and domain knowledge is used to complete the communication diagram by adding details after the transformation. To perform the Use Case-to-communication transformation, a graph representation of software models is chosen.
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Huang, Jingxiu, Ruofei Ding, Xiaomin Wu, Shumin Chen, Jiale Zhang, Lixiang Liu, and Yunxiang Zheng. "WERECE: An Unsupervised Method for Educational Concept Extraction Based on Word Embedding Refinement". Applied Sciences 13, no. 22 (14.11.2023): 12307. http://dx.doi.org/10.3390/app132212307.

Full source text
Abstract:
The era of educational big data has sparked growing interest in extracting and organizing educational concepts from massive amounts of information; the outcomes are of the utmost importance for artificial intelligence-empowered teaching and learning. Unsupervised educational concept extraction methods based on pre-trained models continue to proliferate due to ongoing advances in semantic representation. However, it remains challenging to directly apply pre-trained large language models to extract educational concepts, because pre-trained models are built on extensive corpora and do not necessarily cover all subject-specific concepts. To address this gap, we propose a novel unsupervised method for educational concept extraction based on word embedding refinement (word embedding refinement-based educational concept extraction, WERECE). It integrates a manifold learning algorithm to adapt a pre-trained model for extracting educational concepts while accounting for geometric information in semantic computation. We further devise a discriminant function based on semantic clustering and the Box–Cox transformation to enhance WERECE's accuracy and reliability. We evaluate its performance on two newly constructed datasets, EDU-DT and EDUTECH-DT. Experimental results show that WERECE achieves an average precision of up to 85.9%, recall of up to 87.0%, and F1 score of up to 86.4%, significantly outperforming baselines (TextRank, term frequency-inverse document frequency, isolation forest, K-means, and one-class support vector machine) on educational concept extraction. Notably, WERECE's precision and recall remain robust under different parameter settings. WERECE also holds broad application prospects as a foundational technology, such as for building discipline-oriented knowledge graphs, enhancing learning assessment and feedback, predicting learning interests, and recommending learning resources.
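The pipeline the abstract outlines (manifold-based embedding refinement, semantic clustering, and a Box–Cox-based discriminant) can be sketched with off-the-shelf components. The sketch below uses random vectors and arbitrary parameters purely to show the flow; its toy discriminant is far simpler than WERECE's.

```python
# Hedged sketch of the WERECE idea: refine pre-trained embeddings with
# a manifold learner, cluster the result, and score terms by a Box-Cox
# transformed distance to their cluster centre.

import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans
from scipy.stats import boxcox

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(40, 8))   # stand-in for pre-trained vectors

# Manifold refinement keeps the local geometric structure of the terms.
refined = LocallyLinearEmbedding(n_neighbors=5, n_components=2).fit_transform(embeddings)

# Semantic clustering over the refined space.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(refined)
dists = np.linalg.norm(refined - km.cluster_centers_[km.labels_], axis=1)

# Box-Cox stabilises the skewed distance distribution before thresholding.
scores, _ = boxcox(dists + 1e-9)
is_concept = scores < np.percentile(scores, 50)   # toy discriminant
print(is_concept.sum(), "candidate concepts")
```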
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Celino, Irene, Gloria Re Calegari, and Andrea Fiano. "Refining Linked Data with Games with a Purpose". Data Intelligence 2, no. 3 (July 2020): 417–42. http://dx.doi.org/10.1162/dint_a_00056.

Full source text
Abstract:
With the rise of linked data and knowledge graphs, the need becomes compelling to find suitable solutions that increase the coverage and correctness of data sets, add missing knowledge, and identify and remove errors. Several approaches, mostly relying on machine learning and natural language processing techniques, have been proposed to address this refinement goal; they usually need a partial gold standard, i.e., some "ground truth" to train automatic models. Gold standards are manually constructed, either by involving domain experts or by adopting crowdsourcing and human computation solutions. In this paper, we present an open-source software framework to build Games with a Purpose (GWAP) for linked data refinement, i.e., Web applications that crowdsource a partial ground truth by motivating user participation through a fun incentive. We detail the impact of this new resource by explaining the specific data-linking "purposes" supported by the framework (creation, ranking, and validation of links) and by defining the respective crowdsourcing tasks to achieve those goals. We also introduce our approach for incremental truth inference over the contributions provided by players: we motivate the need for such a method with the specificity of GWAP vs. traditional crowdsourcing, formalize the proposed process, explain its positive consequences, and illustrate the results of an experimental comparison with state-of-the-art approaches. To show this resource's versatility, we describe a set of diverse applications that we built on top of it; to demonstrate its reusability and extensibility potential, we provide references to detailed documentation, including a tutorial which in a few hours guides new adopters through customizing and adapting the framework to a new use case.
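A minimal sketch of the incremental truth-inference idea follows: player votes are weighted by a running reliability estimate, a link is settled once enough weighted evidence accumulates, and reliabilities are then updated. The threshold, update rule, and names are illustrative assumptions, not the paper's formalization.

```python
# Toy incremental truth inference over GWAP contributions.

from collections import defaultdict

reliability = defaultdict(lambda: 0.5)   # prior trust per player
votes = defaultdict(list)                # link -> [(player, verdict)]

def add_vote(link, player, verdict):
    votes[link].append((player, verdict))

def infer(link, threshold=1.0):
    """Settle a link once weighted evidence passes the threshold,
    then reward or penalise the contributing players."""
    weight = sum(reliability[p] if v else -reliability[p]
                 for p, v in votes[link])
    if abs(weight) < threshold:
        return None                       # not enough evidence yet
    truth = weight > 0
    for p, v in votes[link]:
        delta = 0.1 if v == truth else -0.1
        reliability[p] = min(1.0, max(0.0, reliability[p] + delta))
    return truth

add_vote("city:Milan-sameAs-Q490", "alice", True)
add_vote("city:Milan-sameAs-Q490", "bob", True)
print(infer("city:Milan-sameAs-Q490"))   # True once weight >= threshold
```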
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Sokolov, A. V., and L. A. Sokolova. "Monitoring and Forecasting the Dynamics of the Incidence of COVID-19 in Moscow: 2020–2021". Epidemiology and Vaccinal Prevention 21, no. 4 (15.09.2022): 48–59. http://dx.doi.org/10.31631/2073-3046-2022-21-4-48-59.

Full source text
Abstract:
Relevance. The accumulation of information (statistical data and knowledge) about the COVID-19 pandemic leads to the refinement of mathematical models and expands their area of use. The aim of this study is to build a set of models (in line with current knowledge and data) to identify the functions that drive the dynamics of the pandemic and to analyze the possibilities for making predictions. Materials and methods. The work used data from open statistical and information resources relating to all aspects of COVID-19. The basis of the study is the balanced identification method and the information technology of the same name, created at the Center for Distributed Computing of the Institute for Information Transmission Problems of the Russian Academy of Sciences. The technology is used to build (select) models that correspond to the quantity and quality of the data, perform calculations (forecasts), and present results (all graphs in the paper were prepared with it). Results. The constructed models satisfactorily describe the dynamics of the incidence of COVID-19 in Moscow. They can be used for a forecast with a horizon of several months, provided that new, previously absent elements do not appear in the modeled object. The main internal mechanisms that determine the dynamics of the model are herd immunity and an increase in the infectivity of the virus (due to the spread of the Delta and Omicron strains). Conclusion. The results of the successful use of balanced identification technology for monitoring the COVID-19 pandemic are presented: models corresponding to the data available at various points in time (from March 2020 to December 2021); newly acquired knowledge, i.e., the functional dependencies that determine the dynamics of the system; calculations of various epidemic indicators (morbidity, immunity, reproduction indices, etc.); and various forecasts for Moscow (from 12/01/2020, 04/15/2021, 08/01/2021 and 08/01/2021).
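The balanced identification method itself is specific to the described toolchain and is not reproduced here, but the kind of compartmental incidence dynamics such models capture can be hinted at with a generic SIR sketch; the parameters below are arbitrary.

```python
# Generic SIR sketch of incidence dynamics; the paper's models select
# richer structure (time-varying infectivity, strain effects) than this.

import numpy as np

def sir(beta, gamma, s0, i0, days):
    """Simulate daily new infections in a normalized population."""
    s, i = s0, i0
    incidence = []
    for _ in range(days):
        new_inf = beta * s * i          # daily new infections
        rec = gamma * i                 # daily recoveries
        s, i = s - new_inf, i + new_inf - rec
        incidence.append(new_inf)
    return np.array(incidence)

# A higher beta (e.g., a more infectious strain) shifts and raises the peak.
print(sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=5).round(4))
```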
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Subagdja, Budhitama, Shanthoshigaa D, Zhaoxia Wang, and Ah-Hwee Tan. "Machine Learning for Refining Knowledge Graphs: A Survey". ACM Computing Surveys, 15.01.2024. http://dx.doi.org/10.1145/3640313.

Full source text
Abstract:
Knowledge graph (KG) refinement refers to the process of filling in missing information, removing redundancies, and resolving inconsistencies in knowledge graphs. With the growing popularity of KGs in various domains, many techniques involving machine learning have been applied, but no survey has yet been dedicated to machine learning-based KG refinement. Based on a novel framework following the KG refinement process, this paper presents a survey of machine learning approaches to KG refinement according to the kind of operations involved, the training datasets, the mode of learning, and process multiplicity. Furthermore, the survey aims to provide broad practical insights into the development of fully automated KG refinement.
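One representative machine-learning operation from this refinement taxonomy, completing missing triples, can be sketched with a TransE-style score, where a plausible (head, relation, tail) satisfies h + r ≈ t. The embeddings below are random stand-ins for trained ones, so only the scoring pattern is meaningful.

```python
# Scoring a candidate missing triple with TransE-style embeddings.

import numpy as np

rng = np.random.default_rng(1)
entity = {e: rng.normal(size=16) for e in ["Paris", "France", "Berlin"]}
relation = {"capital_of": rng.normal(size=16)}

def transe_score(h, r, t):
    """Lower distance means a more plausible triple."""
    return np.linalg.norm(entity[h] + relation[r] - entity[t])

for tail in ["France", "Berlin"]:
    print(tail, transe_score("Paris", "capital_of", tail).round(3))
```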
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Huseynli, Alisettar, and M. Ali Akcayol. "Continuous Knowledge Graph Refinement with Confidence Propagation". IEEE Access, 2023, 1. http://dx.doi.org/10.1109/access.2023.3283925.

Full source text
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Verma, Shilpa, Rajesh Bhatia, Sandeep Harit, and Sanjay Batish. "Scholarly knowledge graphs through structuring scholarly communication: a review". Complex & Intelligent Systems, 9.08.2022. http://dx.doi.org/10.1007/s40747-022-00806-6.

Full source text
Abstract:
The necessity for scholarly knowledge mining and management has grown significantly as academic literature, and its linkages to authors, grows enormously. Information extraction, ontology matching, and accessing academic components with their relations have become more critical than ever. With the advancement of scientific literature, scholarly knowledge graphs have therefore become critical to various applications where semantics can impart meaning to concepts. The objective of this study is to report a literature review regarding knowledge graph construction, refinement, and utilization in the scholarly domain. Based on the scholarly literature, the study presents a complete assessment of current state-of-the-art techniques. We present an analytical methodology to investigate the existing status of scholarly knowledge graphs (SKGs) built by structuring scholarly communication. This review investigates the application of machine learning, rule-based learning, and natural language processing tools and approaches to construct SKGs. It further reviews knowledge graph utilization and refinement to provide a view of current research efforts. In addition, we discuss existing applications and challenges across construction, refinement, and utilization collectively. This research will help identify frontier trends in SKGs and motivate future researchers to carry the work forward.
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Zhong, Lingfeng, Jia Wu, Qian Li, Hao Peng, and Xindong Wu. "A Comprehensive Survey on Automatic Knowledge Graph Construction". ACM Computing Surveys, 5.09.2023. http://dx.doi.org/10.1145/3618295.

Full source text
Abstract:
Automatic knowledge graph construction aims to manufacture structured human knowledge. To this end, much effort has historically been spent extracting informative fact patterns from different data sources. More recently, however, research interest has shifted to acquiring conceptualized structured knowledge beyond informative data. In addition, researchers have been exploring new ways of handling sophisticated construction tasks in diversified scenarios. Thus, there is a demand for a systematic review of paradigms that organize knowledge structures beyond data-level mentions. To meet this demand, we comprehensively survey more than 300 methods to summarize the latest developments in knowledge graph construction. A knowledge graph is built in three steps: knowledge acquisition, knowledge refinement, and knowledge evolution. The processes of knowledge acquisition are reviewed in detail, including obtaining entities with fine-grained types and their conceptual linkages to knowledge graphs, resolving coreferences, and extracting entity relationships in complex scenarios. The survey covers models for knowledge refinement, including knowledge graph completion and knowledge fusion. Methods for handling knowledge evolution are also systematically presented, including conditional knowledge acquisition, conditional knowledge graph completion, and knowledge dynamics. We present paradigms to compare these methods along the axes of data environment, motivation, and architecture. Additionally, we provide briefs on accessible resources that can help readers develop practical knowledge graph systems. The survey concludes with discussions of the challenges and possible directions for future exploration.
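To make the three-step loop the survey is organized around concrete, here is a skeleton showing only the data flow between the stages; each stage is a toy stub rather than any surveyed method.

```python
# Skeleton of the acquisition -> refinement -> evolution loop.

def acquire(text):
    """Knowledge acquisition: extract (head, relation, tail) mentions.
    Toy pattern split; real systems use NER and relation extraction."""
    subj, rel, obj = text.split(",")
    return [(subj.strip(), rel.strip(), obj.strip())]

def refine(triples, known):
    """Knowledge refinement: drop duplicates, a tiny fusion step."""
    return [t for t in triples if t not in known]

def evolve(kg, triples, timestamp):
    """Knowledge evolution: store triples with a temporal condition."""
    kg.extend((t, timestamp) for t in triples)

kg = []
new = refine(acquire("Marie Curie, discovered, polonium"), set())
evolve(kg, new, "1898")
print(kg)
```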
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Nayantara Jeyaraj, Manuela, Srinath Perera, Malith Jayasinghe, and Nadheesh Jihan. "Probabilistic Error Detection Model for Knowledge Graph Refinement". Computación y Sistemas 26, no. 3 (5.09.2022). http://dx.doi.org/10.13053/cys-26-3-4346.

Full source text
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Kim, Haklae. "A knowledge graph of interlinking digital records: the case of the 1997 Korean financial crisis". Electronic Library, 3.10.2023. http://dx.doi.org/10.1108/el-05-2023-0131.

Full source text
Abstract:
Purpose. Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study proposes a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities. Design/methodology/approach. This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as the Records in Contexts Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment, and reconciliation with examples; and demonstrates the use of the constructed knowledge graphs through semantic queries. Findings. This study introduced the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive's records and facilitating comprehensive descriptions of the different record entities. Applying archival ontologies together with general-purpose vocabularies to digital records is advised to enhance metadata coherence and semantic search. Originality/value. Most digital archives operated in Korea make limited use of archival metadata standards. The contribution of this study is a practical application of knowledge graph technology for linking and exploring digital records. The study details the process of collecting raw archive data, preprocessing, and enrichment, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built with the RiC-O, Wikidata, and Schema.org vocabularies, and the semantic queries over it, can supplement keyword search in conventional digital archives.
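The pattern of describing records with RiC-O and linking them to Wikidata and Schema.org can be sketched with rdflib; the record IRI and the Wikidata subject entity below are illustrative stand-ins, not the actual 97imf.kr data.

```python
# Linking an archive record to external vocabularies and querying it.

from rdflib import Graph, Namespace, URIRef, Literal

RICO = Namespace("https://www.ica.org/standards/RiC/ontology#")
SCHEMA = Namespace("https://schema.org/")
WD = Namespace("http://www.wikidata.org/entity/")

g = Graph()
record = URIRef("https://97imf.kr/records/1")          # hypothetical record
g.add((record, RICO.hasOrHadSubject, WD.Q628259))      # illustrative subject IRI
g.add((record, SCHEMA.name, Literal("IMF bailout agreement record")))

# Semantic query: find all records about a given subject entity.
q = "SELECT ?rec WHERE { ?rec <%s> <%s> . }" % (
    RICO.hasOrHadSubject, WD.Q628259)
for row in g.query(q):
    print(row.rec)
```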
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Fenoglio, Enzo, Emre Kazim, Hugo Latapie, and Adriano Koshiyama. "Tacit knowledge elicitation process for industry 4.0". Discover Artificial Intelligence 2, no. 1 (10.03.2022). http://dx.doi.org/10.1007/s44163-022-00020-w.

Full source text
Abstract:
Manufacturers are migrating their processes to Industry 4.0, which includes new technologies for improving the productivity and efficiency of operations. One of the issues is capturing, recreating, and documenting the tacit knowledge of aging workers; however, there are no systematic procedures to incorporate this knowledge into Enterprise Resource Planning systems and maintain a competitive advantage. This paper describes a proposed tacit knowledge elicitation process for capturing the operational best practices of experienced workers in industrial domains, based on a mix of algorithmic techniques and a cooperative game. We use domain ontologies for Industry 4.0 and reasoning techniques to discover and integrate new facts from textual sources into an Operational Knowledge Graph. We describe an iterative concept-formation process in a role game played by human and virtual agents, through socialization and externalization, for knowledge graph refinement. Ethical and societal concerns are discussed as well.
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Babalou, Samira, David Schellenberger Costa, Helge Bruelheide, Jens Kattge, Christine Römermann, Christian Wirth, and Birgitta König-Ries. "iKNOW: A platform for knowledge graph construction for biodiversity". Biodiversity Information Science and Standards 6 (23.08.2022). http://dx.doi.org/10.3897/biss.6.93867.

Full source text
Abstract:
Nowadays, more and more biodiversity datasets containing observational and experimental data are collected and produced by different projects. In order to answer the fundamental questions of biodiversity research, these data need to be integrated for joint analyses; however, to date, these data too often remain isolated in silos. Both in academia and industry, Knowledge Graphs (KGs) are widely regarded as a promising approach to overcoming data silos and the lack of a common understanding of data (Fensel and Şimşek 2020). KGs are graph-structured knowledge bases that store factual information in the form of structured relationships between entities, like "tree_species has_trait average_SLA" or "nutans is_observed_in SCH_Location" (Hogan et al. 2021). In our context, entities could be, e.g., abstract concepts like a kingdom, a species, or a trait, or a concrete specimen of a species; example relationships could be "co-occurs" or "possesses-trait". KGs for biodiversity have been proposed by Page 2019 and have also been a topic at prior TDWG conferences (Page 2021). However, to date, uptake of this concept in the community has been rather slow (Sachs et al. 2019). We argue that this is at least partially due to the high effort and expertise required to develop and manage such KGs. Therefore, in our ongoing project, iKNOW (Babalou et al. 2021), we aim to provide a toolbox for reproducible KG creation. While iKNOW is still at an early stage, we aim to make this platform open source and freely available to the biodiversity community, so that it can significantly contribute to making biodiversity data widely available, easily discoverable, and integratable. For now, we focus on tabular datasets resulting from biodiversity observation or sampling events or experiments. Given such a dataset, iKNOW will support its transformation into (subject, predicate, object) triples in the RDF (Resource Description Framework) standard. Every uploaded dataset will be considered a subgraph of the main KG in iKNOW. If required, data can be cleaned. After that, the entities and the relationships among them are extracted; for this, a user will be able to select one of the existing semi-automatic tools available on our platform (e.g., JenTab (Abdelmageed and Schindler 2020)). The entities in this step can be linked to respective global identifiers in Wikidata, GBIF (the Global Biodiversity Information Facility), or any other user-selected knowledge resource. In the next step, (subject, predicate, object) triples are created based on the information extracted in the previous steps. After these processes, the generated sub-KG can be used directly. However, one can take further steps, such as Triple Augmentation (generate new triples and extra relations to ease KG completion), Schema Refinement (refine the schema, e.g., via logical reasoning, for KG completion and correctness), Quality Checking (check the quality of the generated sub-KG), and Query Building (create customized SPARQL queries for the generated KG). iKNOW will include a wide range of functionalities for creating, accessing, querying, visualizing, updating, reproducing, and tracking the provenance of KGs. The reproducibility of such KG creation is essential to strengthening the establishment of open science practices in the biodiversity domain. Thus, all information regarding the user-selected tools, with their parameters and settings, along with the initial dataset and intermediate results, is saved at every step of our platform. This allows users to redo previous steps and enables us to track the provenance of the created KG. The iKNOW project is a joint effort by computer scientists and domain experts from the German Centre for Integrative Biodiversity Research (iDiv). As a showcase, we aim to create a KG of plant-related data sources at iDiv. These include, among others: TRY, the plant trait database (Kattge and Díaz 2011); sPlot, the database of global patterns of taxonomic, functional, and phylogenetic diversity (Bruelheide and Dengler 2019); PhenObs, the dataset of the global network of botanical gardens monitoring the impacts of climate change on the phenology of herbaceous plant species (Nordt and Hensen 2021); LCVP, the Leipzig Catalogue of Vascular Plants (Freiberg and Winter 2020); and many others. The resulting KG will serve as a discovery tool for biodiversity data and provide a robust infrastructure for managing biodiversity knowledge. From the biodiversity research perspective, iKNOW will contribute to creating a dataset following the Linked Open Data principles by interlinking to cross-domain and domain-specific KGs. From the computer science perspective, iKNOW will contribute to developing tools for the dynamic, low-effort creation of reproducible knowledge graphs.
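A minimal sketch of the tabular-to-triple step described above: each row of an observation table becomes (subject, predicate, object) triples, with entities linked to global identifiers where a linking table knows them. The row, predicates, and GBIF link are hypothetical examples, not iKNOW's actual output.

```python
# Toy conversion of a tabular biodiversity record into RDF-style triples.

rows = [
    {"species": "Quercus robur", "trait": "SLA", "value": "12.3"},
]
# Hypothetical linking table; real pipelines query GBIF or Wikidata.
links = {"Quercus robur": "https://www.gbif.org/species/2878688"}

triples = []
for r in rows:
    subj = links.get(r["species"], r["species"])  # fall back to the label
    triples.append((subj, "possesses-trait", r["trait"]))
    triples.append((subj, "has-value", r["value"]))

for t in triples:
    print(t)
```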
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Xiao, Meng, Dongjie Wang, Min Wu, Kunpeng Liu, Hui Xiong, Yuanchun Zhou, and Yanjie Fu. "Traceable Group-Wise Self-Optimizing Feature Transformation Learning: A Dual Optimization Perspective". ACM Transactions on Knowledge Discovery from Data, 20.12.2023. http://dx.doi.org/10.1145/3638059.

Full source text
Abstract:
Feature transformation aims to reconstruct an effective representation space by mathematically refining the existing features. It serves as a pivotal approach to combating the curse of dimensionality, enhancing model generalization, mitigating data sparsity, and extending the applicability of classical models. Existing research predominantly focuses on domain knowledge-based feature engineering or on learning latent representations. However, these methods, while insightful, lack full automation and fail to yield a traceable and optimal representation space. An indispensable question arises: can we concurrently address these limitations when reconstructing a feature space for a machine learning task? Our initial work took a pioneering step towards this challenge by introducing a novel self-optimizing framework, which leverages three cascading reinforced agents to automatically select candidate features and operations for generating improved feature transformation combinations. Despite the impressive strides made, there was room for enhancing its effectiveness and generalization capability. In this extended journal version, we advance our initial work from two distinct yet interconnected perspectives: 1) we propose a refinement of the original framework that integrates a graph-based state representation method to capture feature interactions more effectively and develops different Q-learning strategies to further alleviate Q-value overestimation; 2) we utilize a new optimization technique (actor-critic) to train the entire self-optimizing framework in order to accelerate model convergence and improve feature transformation performance. Finally, to validate the improved effectiveness and generalization capability of our framework, we perform extensive experiments and conduct comprehensive analyses. These provide empirical evidence of the strides made in this journal version over the initial work, solidifying our framework's standing as a substantial contribution to the field of automated feature transformation. To improve reproducibility, we have released the associated code and data on GitHub: https://github.com/coco11563/TKDD2023_code.
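The double estimator idea behind "Q-learning strategies to alleviate Q-value overestimation" can be sketched as follows; the states and actions are toy stand-ins for the paper's graph-based state representation and transformation operations.

```python
# Double Q-learning sketch: two tables decouple action selection
# from action evaluation, curbing the overestimation of a single max.

import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 4, 3
qa = np.zeros((n_states, n_actions))
qb = np.zeros((n_states, n_actions))

def update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Randomly pick one table to update, choosing the greedy action
    with it but evaluating that action with the other table."""
    if rng.random() < 0.5:
        best = qa[s_next].argmax()
        qa[s, a] += alpha * (r + gamma * qb[s_next, best] - qa[s, a])
    else:
        best = qb[s_next].argmax()
        qb[s, a] += alpha * (r + gamma * qa[s_next, best] - qb[s, a])

update(s=0, a=1, r=0.8, s_next=2)
print(qa[0], qb[0])
```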
Styles: APA, Harvard, Vancouver, ISO, etc.
46

"Large-Scale Knowledge Synthesis and Complex Information Retrieval from Biomedical Documents". Journal of Anesthesia & Pain Medicine 8, nr 3 (16.05.2023). http://dx.doi.org/10.33140/japm.08.03.03.

Pełny tekst źródła
Streszczenie:
Recent advances in the healthcare industry have led to an abundance of unstructured data, making tasks such as efficient and accurate information retrieval challenging at scale. Our work offers an all-in-one scalable solution for extracting and exploring complex information from large-scale research documents, which would otherwise be tedious. First, we briefly explain our knowledge synthesis process for extracting helpful information from the unstructured text of research documents. Then, on top of the knowledge extracted from the documents, we perform complex information retrieval using three major components: Paragraph Retrieval, Triplet Retrieval from Knowledge Graphs, and Complex Question Answering (QA). These components combine lexical and semantic-based methods to retrieve paragraphs and triplets and perform faceted refinement for filtering the search results. The complexity of biomedical queries and documents necessitates a QA system capable of handling queries more complex than factoid queries, which we evaluate qualitatively on the COVID-19 Open Research Dataset (CORD-19) to demonstrate its effectiveness and value-add.
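A sketch of the lexical-plus-semantic fusion such a retrieval component performs follows: TF-IDF cosine similarity stands in for the lexical signal and random vectors for a dense encoder, so only the late-fusion pattern is meaningful here.

```python
# Hybrid paragraph scoring: lexical + semantic signals, late fusion.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["covid vaccine efficacy trial", "knee surgery anesthesia dosage"]
query = "vaccine trial results"

vec = TfidfVectorizer().fit(docs + [query])
lexical = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]

rng = np.random.default_rng(3)                  # stand-in dense encoder
embed = {t: rng.normal(size=32) for t in docs + [query]}
semantic = np.array([
    np.dot(embed[query], embed[d]) /
    (np.linalg.norm(embed[query]) * np.linalg.norm(embed[d]))
    for d in docs])

score = 0.5 * lexical + 0.5 * semantic          # simple late fusion
print(dict(zip(docs, score.round(3))))
```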
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Angelis, Sotiris, Efthymia Moraitou, George Caridakis, and Konstantinos Kotis. "CHEKG: a collaborative and hybrid methodology for engineering modular and fair domain-specific knowledge graphs". Knowledge and Information Systems, 20.04.2024. http://dx.doi.org/10.1007/s10115-024-02110-w.

Full source text
Abstract:
Ontologies constitute the semantic model of Knowledge Graphs (KGs). This structural association indicates the potential existence of methodological analogies in the development of ontologies and KGs. The deployment of fully and well-defined methodologies for KG development based on existing ontology engineering methodologies (OEMs) has been suggested and efficiently applied. However, most modern OEMs may not include tasks that (i) empower knowledge workers and domain experts to closely collaborate with ontology engineers and KG specialists in the development and maintenance of KGs, and (ii) satisfy special requirements of KG development, such as (a) ensuring the modularity and agility of KGs and (b) assessing and mitigating bias at the schema and data levels. Toward this aim, the paper presents a methodology for the Collaborative and Hybrid Engineering of Knowledge Graphs (CHEKG), a hybrid (schema-centric/top-down and data-driven/bottom-up), collaborative, agile, and iterative approach for developing modular and fair domain-specific KGs. CHEKG contributes to all phases of the KG engineering lifecycle: from the specification of a KG to its exploitation, evaluation, and refinement. The CHEKG methodology is based on the main phases of the extended Human-Centered Collaborative Ontology Engineering Methodology (ext-HCOME), while it adjusts and expands the individual processes and tasks of each phase according to the specialized requirements of KG development. Apart from presenting the methodology per se, the paper reports recent work on the deployment and evaluation of CHEKG for engineering semantic trajectories as KGs generated from unmanned aerial vehicle (UAV) data in real cultural heritage documentation scenarios.
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Kumar, Sumit, Leela Venkata Lokesh Billa, Ojaswi Bhimineni, Dr Rachana Jaiswal, and Aadarsh Bisht. "Implementation of ICT Tools in a Vocational Skill Development Program". International Journal of Next-Generation Computing, 31.10.2022. http://dx.doi.org/10.47164/ijngc.v13i3.729.

Full source text
Abstract:
Introduction - Skills and knowledge are the driving forces behind every country's economic and social progress. In the digital era, ICT materials such as PPTs, tutorial videos, animations, e-materials, and web resources are quite useful in the education sector for improved comprehension. Aim of the study - To assess ICT skills among teachers who teach vocational skills in educational institutions; to determine the techniques and ICT technologies used in institutions to provide vocational skills; and to better understand the issues that vocational skill providers confront, as well as the improvements and refinements necessary in the current system. Research methodology - The sample was chosen using a multistage random sampling process. The research covered all of the skill-training institutes and centres in Dindigul's taluks, and a total of 250 teachers providing vocational skills were chosen for the study. Data analysis - The collected data were analysed using frequencies, percentages, and graphs in the Statistical Package for the Social Sciences (SPSS). Conclusion - It is concluded that 50.8 percent of respondents believe there is a shortage of ICT-based reference resources for soft-skill training. At the same time, the majority of respondents (88%) expressed an interest in receiving training in ICT-based occupational skill instruction. As a result, all occupational skill providers should receive intensive ICT-based teaching and learning training, and institutions must create a suitable environment for ICT-based education.
Styles: APA, Harvard, Vancouver, ISO, etc.