
Journal articles on the topic "Data Domains"

Listed below are the top 50 journal articles for research on the topic "Data Domains".


1

Macak, Martin, Mouzhi Ge, and Barbora Buhnova. "A Cross-Domain Comparative Study of Big Data Architectures". International Journal of Cooperative Information Systems 29, no. 04 (October 28, 2020): 2030001. http://dx.doi.org/10.1142/s0218843020300016.

Abstract
Nowadays, a variety of Big Data architectures are emerging to organize the Big Data life cycle. While some of these architectures are proposed for general usage, many of them are proposed in a specific application domain such as smart cities, transportation, healthcare, and agriculture. There is, however, a lack of understanding of how and why Big Data architectures vary in different domains and how the Big Data architecture strategy in one domain may possibly advance other domains. Therefore, this paper surveys and compares the Big Data architectures in different application domains. It also chooses a representative architecture of each researched application domain to indicate which Big Data architecture from a given domain the researchers and practitioners may possibly start from. Next, a pairwise cross-domain comparison among the Big Data architectures is presented to outline the similarities and differences between the domain-specific architectures. Finally, the paper provides a set of practical guidelines for Big Data researchers and practitioners to build and improve Big Data architectures based on the knowledge gathered in this study.

2

Rui, Xue, Ziqiang Li, Yang Cao, Ziyang Li, and Weiguo Song. "DILRS: Domain-Incremental Learning for Semantic Segmentation in Multi-Source Remote Sensing Data". Remote Sensing 15, no. 10 (May 12, 2023): 2541. http://dx.doi.org/10.3390/rs15102541.

Abstract
With the exponential growth in the speed and volume of remote sensing data, deep learning models are expected to adapt and continually learn over time. Unfortunately, the domain shift between multi-source remote sensing data from various sensors and regions poses a significant challenge. Segmentation models face difficulty in adapting to incremental domains due to catastrophic forgetting, which can be addressed via incremental learning methods. However, current incremental learning methods mainly focus on class-incremental learning, wherein classes belong to the same remote sensing domain, and neglect investigations into incremental domains in remote sensing. To solve this problem, we propose a domain-incremental learning method for semantic segmentation in multi-source remote sensing data. Specifically, our model aims to incrementally learn a new domain while preserving its performance on previous domains without accessing previous domain data. To achieve this, our model has a unique parameter learning structure that reparametrizes domain-agnostic and domain-specific parameters. We use different optimization strategies to adapt to domain shift in incremental domain learning. Additionally, we adopt multi-level knowledge distillation loss to mitigate the impact of label space shift among domains. The experiments demonstrate that our method achieves excellent performance in domain-incremental settings, outperforming existing methods with only a few parameters.
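
To make the mechanism concrete, the following is a minimal PyTorch sketch of the core idea described in the abstract: domain-agnostic parameters shared across domains, small domain-specific residual adapters, and a distillation term against the model frozen after the previous domains. All layer shapes, names, and the loss weighting are illustrative assumptions, not the authors' DILRS implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainIncrementalSeg(nn.Module):
    """Toy segmentation net with domain-agnostic and domain-specific parts."""
    def __init__(self, n_classes: int, n_domains: int):
        super().__init__()
        self.shared = nn.Conv2d(3, 16, 3, padding=1)           # domain-agnostic
        self.adapters = nn.ModuleList(                         # domain-specific
            nn.Conv2d(16, 16, 1) for _ in range(n_domains))
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x, domain_id: int):
        h = F.relu(self.shared(x))
        h = h + self.adapters[domain_id](h)    # reparameterized residual path
        return self.head(h)

def incremental_loss(model, old_model, x, y, domain_id, alpha=0.5):
    """Cross-entropy on the new domain plus a distillation term that keeps
    predictions close to the model frozen after the previous domains."""
    logits = model(x, domain_id)
    loss = F.cross_entropy(logits, y)
    with torch.no_grad():                      # no access to old-domain data
        old_logits = old_model(x, domain_id - 1)
    loss = loss + alpha * F.kl_div(F.log_softmax(logits, dim=1),
                                   F.softmax(old_logits, dim=1),
                                   reduction="batchmean")
    return loss
```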

3

Son, Jiseong, Chul-Su Lim, Hyoung-Seop Shim, and Ji-Sun Kang. "Development of Knowledge Graph for Data Management Related to Flooding Disasters Using Open Data". Future Internet 13, no. 5 (May 11, 2021): 124. http://dx.doi.org/10.3390/fi13050124.

Abstract
Despite the development of various technologies and systems using artificial intelligence (AI) to solve problems related to disasters, difficult challenges are still being encountered. Data are the foundation for solving diverse disaster problems using AI, big data analysis, and so on. Therefore, we must focus on these various data. Disaster data depend on the domain of each disaster type, include heterogeneous data, and lack interoperability. In particular, in the case of open data related to disasters, there are several issues: the source and format of the data differ because various data are collected by different organizations, and the vocabularies used for each domain are inconsistent. This study proposes a knowledge graph to resolve the heterogeneity among various disaster data and provide interoperability among domains. Among disaster domains, we describe a knowledge graph for flooding disasters using Korean open datasets and cross-domain knowledge graphs. Furthermore, the proposed knowledge graph is used to assist in solving and managing disaster problems.
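
As a concrete illustration of the cross-domain linking the abstract describes, here is a minimal sketch using rdflib. The namespace, classes, properties, and values are invented placeholders, not the vocabulary of the paper's knowledge graph.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/disaster#")  # placeholder namespace
g = Graph()
g.bind("ex", EX)

# Link a flooding event to facts that, in the open data, would come from
# different organizations with different source vocabularies.
g.add((EX.flood_2020_07, RDF.type, EX.FloodingDisaster))
g.add((EX.flood_2020_07, EX.observedRainfall, Literal("310 mm / 24 h")))
g.add((EX.flood_2020_07, EX.affectsRegion, EX.RegionA))
g.add((EX.RegionA, EX.population, Literal(100000)))  # placeholder value

# Cross-domain query: regions hit by flooding, joined with their population.
q = """
SELECT ?region ?pop WHERE {
  ?event a ex:FloodingDisaster ; ex:affectsRegion ?region .
  ?region ex:population ?pop .
}"""
for region, pop in g.query(q, initNs={"ex": EX}):
    print(region, pop)
```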

4

Crooks, Natacha. "Efficient Data Sharing across Trust Domains". ACM SIGMOD Record 52, no. 2 (August 10, 2023): 36–37. http://dx.doi.org/10.1145/3615952.3615962.

Abstract
Cross-Trust-Domain Processing. Data is now a commodity. We know how to compute and store it efficiently and reliably at scale. We have, however, paid less attention to the notion of trust. Yet, data owners today are no longer the entities storing or processing their data (medical records are stored on the cloud, data is shared across banks, etc.). In fact, distributed systems today consist of many different parties, whether it is cloud providers, jurisdictions, organisations or humans. Modern data processing and storage always straddles trust domains.

5

Jeon, Hyunsik, Seongmin Lee, and U. Kang. "Unsupervised multi-source domain adaptation with no observable source data". PLOS ONE 16, no. 7 (July 9, 2021): e0253415. http://dx.doi.org/10.1371/journal.pone.0253415.

Abstract
Given trained models from multiple source domains, how can we predict the labels of unlabeled data in a target domain? Unsupervised multi-source domain adaptation (UMDA) aims at predicting the labels of unlabeled target data by transferring the knowledge of multiple source domains. UMDA is a crucial problem in many real-world scenarios where no labeled target data are available. Previous approaches in UMDA assume that data are observable over all domains. However, source data are not easily accessible due to privacy or confidentiality issues in many practical scenarios, although classifiers learned in source domains are readily available. In this work, we target data-free UMDA where source data are not observable at all, a novel problem that has not been studied before despite being very realistic and crucial. To solve data-free UMDA, we propose DEMS (Data-free Exploitation of Multiple Sources), a novel architecture that adapts target data to source domains without exploiting any source data, and estimates the target labels by exploiting pre-trained source classifiers. Extensive experiments for data-free UMDA on real-world datasets show that DEMS provides state-of-the-art accuracy, up to 27.5 percentage points higher than that of the best baseline.
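
The data-free setting is easy to state in code: only pretrained source classifiers are available, and target labels must be estimated from their outputs alone. The sketch below is a simplified confidence-weighted ensemble over source predictions, an illustrative baseline for this setting rather than the DEMS architecture itself.

```python
import numpy as np

def entropy(p, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=-1)

def combine_source_predictions(probs_per_source):
    """probs_per_source: list of (n_samples, n_classes) softmax outputs,
    one array per pretrained source classifier; no source data needed."""
    probs = np.stack(probs_per_source)              # (n_src, n, c)
    conf = 1.0 / (1.0 + entropy(probs))             # low entropy -> high weight
    weights = conf / conf.sum(axis=0, keepdims=True)
    fused = (weights[..., None] * probs).sum(axis=0)
    return fused.argmax(axis=-1)                    # estimated target labels

# usage: three source classifiers, four target samples, three classes
rng = np.random.default_rng(0)
fake_outputs = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
print(combine_source_predictions(fake_outputs))
```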

6

Kang, Byung Ok, Hyeong Bae Jeon, and Jeon Gue Park. "Speech Recognition for Task Domains with Sparse Matched Training Data". Applied Sciences 10, no. 18 (September 4, 2020): 6155. http://dx.doi.org/10.3390/app10186155.

Abstract
We propose two approaches to handle speech recognition for task domains with sparse matched training data. One is an active learning method that selects training data for the target domain from another general domain that already has a significant amount of labeled speech data. This method uses attribute-disentangled latent variables. For the active learning process, we designed an integrated system consisting of a variational autoencoder with an encoder that infers latent variables with disentangled attributes from the input speech, and a classifier that selects training data with attributes matching the target domain. The other method combines data augmentation methods for generating matched target domain speech data and transfer learning methods based on teacher/student learning. To evaluate the proposed method, we experimented with various task domains with sparse matched training data. The experimental results show that the proposed method has qualitative characteristics that are suitable for the desired purpose, it outperforms random selection, and is comparable to using an equal amount of additional target domain data.

7

Li, Rumeng, Xun Wang, and Hong Yu. "MetaMT, a Meta Learning Method Leveraging Multiple Domain Data for Low Resource Machine Translation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8245–52. http://dx.doi.org/10.1609/aaai.v34i05.6339.

Abstract
Neural machine translation (NMT) models have achieved state-of-the-art translation quality with a large quantity of parallel corpora available. However, their performance suffers significantly when it comes to domain-specific translations, in which training data are usually scarce. In this paper, we present a novel NMT model with a new word embedding transition technique for fast domain adaption. We propose to split parameters in the model into two groups: model parameters and meta parameters. The former are used to model the translation while the latter are used to adjust the representational space to generalize the model to different domains. We mimic the domain adaptation of the machine translation model to low-resource domains using multiple translation tasks on different domains. A new training strategy based on meta-learning is developed along with the proposed model to update the model parameters and meta parameters alternately. Experiments on datasets of different domains showed substantial improvements of NMT performances on a limited amount of data.

8

Silva, Amila, Ling Luo, Shanika Karunasekera, and Christopher Leckie. "Embracing Domain Differences in Fake News: Cross-domain Fake News Detection using Multi-modal Data". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 557–65. http://dx.doi.org/10.1609/aaai.v35i1.16134.

Abstract
With the rapid evolution of social media, fake news has become a significant social problem, which cannot be addressed in a timely manner using manual investigation. This has motivated numerous studies on automating fake news detection. Most studies explore supervised training models with different modalities (e.g., text, images, and propagation networks) of news records to identify fake news. However, the performance of such techniques generally drops if news records are coming from different domains (e.g., politics, entertainment), especially for domains that are unseen or rarely-seen during training. As motivation, we empirically show that news records from different domains have significantly different word usage and propagation patterns. Furthermore, due to the sheer volume of unlabelled news records, it is challenging to select news records for manual labelling so that the domain-coverage of the labelled dataset is maximised. Hence, this work: (1) proposes a novel framework that jointly preserves domain-specific and cross-domain knowledge in news records to detect fake news from different domains; and (2) introduces an unsupervised technique to select a set of unlabelled informative news records for manual labelling, which can be ultimately used to train a fake news detection model that performs well for many domains while minimizing the labelling cost. Our experiments show that the integration of the proposed fake news model and the selective annotation approach achieves state-of-the-art performance for cross-domain news datasets, while yielding notable improvements for rarely-appearing domains in news datasets.

9

Darch, Peter T., and Christine L. Borgman. "Ship space to database: emerging infrastructures for studies of the deep subseafloor biosphere". PeerJ Computer Science 2 (November 14, 2016): e97. http://dx.doi.org/10.7717/peerj-cs.97.

Abstract
Background: An increasing array of scientific fields face a “data deluge.” However, in many fields data are scarce, with implications for their epistemic status and ability to command funding. Consequently, they often attempt to develop infrastructure for data production, management, curation, and circulation. A component of a knowledge infrastructure may serve one or more scientific domains. Further, a single domain may rely upon multiple infrastructures simultaneously. Studying how domains negotiate building and accessing scarce infrastructural resources that they share with other domains will shed light on how knowledge infrastructures shape science.
Methods: We conducted an eighteen-month, qualitative study of scientists studying the deep subseafloor biosphere, focusing on the Center for Dark Energy Biosphere Investigations (C-DEBI) and the Integrated Ocean Drilling Program (IODP) and its successor, the International Ocean Discovery Program (IODP2). Our methods comprised ethnographic observation, including eight months embedded in a laboratory, interviews (n = 49), and document analysis.
Results: Deep subseafloor biosphere research is an emergent domain. We identified two reasons for the domain’s concern with data scarcity: limited ability to pursue their research objectives, and the epistemic status of their research. Domain researchers adopted complementary strategies to acquire more data. One was to establish C-DEBI as an infrastructure solely for their domain. The second was to use C-DEBI as a means to gain greater access to, and reconfigure, IODP/IODP2 to their advantage. IODP/IODP2 functions as infrastructure for multiple scientific domains, which creates competition for resources. C-DEBI is building its own data management infrastructure, both to acquire more data from IODP and to make better use of data, once acquired.
Discussion: Two themes emerge. One is data scarcity, which can be understood only in relation to a domain’s objectives. To justify support for public funding, domains must demonstrate their utility to questions of societal concern or existential questions about humanity. The deep subseafloor biosphere domain aspires to address these questions in a more statistically intensive manner than is afforded by the data to which it currently has access. The second theme is the politics of knowledge infrastructures. A single scientific domain may build infrastructure for itself and negotiate access to multi-domain infrastructure simultaneously. C-DEBI infrastructure was designed both as a response to scarce IODP/IODP2 resources, and to configure the data allocation processes of IODP/IODP2 in their favor.

10

Nakahira, Katsuko T., Yoshiki Mikami, Hiroyuki Namba, Minehiro Takeshita, and Shigeaki Kodama. "Country domain governance: an analysis by data-mining of country domains". Artificial Life and Robotics 16, no. 3 (December 2011): 311–14. http://dx.doi.org/10.1007/s10015-011-0937-5.

11

Zhao, Liang. "Event Prediction in the Big Data Era". ACM Computing Surveys 54, no. 5 (June 2021): 1–37. http://dx.doi.org/10.1145/3450287.

Abstract
Events are occurrences in specific locations, time, and semantics that nontrivially impact either our society or nature, such as earthquakes, civil unrest, system failures, pandemics, and crimes. It is highly desirable to be able to anticipate the occurrence of such events in advance to reduce the potential social upheaval and damage caused. Event prediction, which has traditionally been prohibitively challenging, is now becoming a viable option in the big data era and is thus experiencing rapid growth, also thanks to advances in high-performance computers and new Artificial Intelligence techniques. There is a large amount of existing work that focuses on addressing the challenges involved, including heterogeneous multi-faceted outputs, complex (e.g., spatial, temporal, and semantic) dependencies, and streaming data feeds. Due to the strong interdisciplinary nature of event prediction problems, most existing event prediction methods were initially designed to deal with specific application domains, though the techniques and evaluation procedures utilized are usually generalizable across different domains. However, it is imperative yet difficult to cross-reference the techniques across different domains, given the absence of a comprehensive literature survey for event prediction. This article aims to provide a systematic and comprehensive survey of the technologies, applications, and evaluations of event prediction in the big data era. First, systematic categorization and summary of existing techniques are presented, which facilitate domain experts’ searches for suitable techniques and help model developers consolidate their research at the frontiers. Then, comprehensive categorization and summary of major application domains are provided to introduce wider applications to model developers to help them expand the impacts of their research. Evaluation metrics and procedures are summarized and standardized to unify the understanding of model performance among stakeholders, model developers, and domain experts in various application domains. Finally, open problems and future directions are discussed. Additional resources related to event prediction are available on the paper's website: http://cs.emory.edu/∼lzhao41/projects/event_prediction_site.html.

12

Cheng, Hua, Renjie Yu, Yixin Tang, Yiquan Fang, and Tao Cheng. "Text Classification Model Enhanced by Unlabeled Data for LaTeX Formula". Applied Sciences 11, no. 22 (November 9, 2021): 10536. http://dx.doi.org/10.3390/app112210536.

Abstract
Generic language models pretrained on large unspecific domains are currently the foundation of NLP. Labeled data are limited in most model training due to the cost of manual annotation, especially in domains with massive numbers of proper nouns such as mathematics and biology, where this affects the accuracy and robustness of model prediction. However, directly applying a generic language model to a specific domain does not work well. This paper introduces a BERT-based text classification model enhanced by unlabeled data (UL-BERT) in the LaTeX formula domain. A two-stage pretraining model based on BERT (TP-BERT) is pretrained on unlabeled data in the LaTeX formula domain. A double-prediction pseudo-labeling (DPP) method is introduced to obtain high-confidence pseudo-labels for unlabeled data by self-training. Moreover, a multi-round teacher–student model training approach is proposed for UL-BERT model training with few labeled data and more unlabeled data with pseudo-labels. Experiments on classification in the LaTeX formula domain show that classification accuracy is significantly improved by UL-BERT, with the F1 score enhanced by up to 2.76%, and fewer resources are needed for model training. It is concluded that our method may be applicable to other specific domains with enormous unlabeled data and limited labeled data.
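
A minimal sketch of the double-prediction idea as described: an unlabeled sample receives a pseudo-label only when two predictions for it agree and both are confident. The threshold and the source of the two predictions (for example, two snapshots of the self-trained model) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def select_pseudo_labels(logits_a, logits_b, threshold=0.9):
    """logits_a, logits_b: (n, c) predictions for the same unlabeled batch."""
    conf_a, lab_a = F.softmax(logits_a, dim=-1).max(dim=-1)
    conf_b, lab_b = F.softmax(logits_b, dim=-1).max(dim=-1)
    keep = (lab_a == lab_b) & (conf_a > threshold) & (conf_b > threshold)
    return lab_a[keep], keep   # confident pseudo-labels and their sample mask

# usage on random logits; kept samples then join the labeled training set
labels, mask = select_pseudo_labels(torch.randn(8, 5), torch.randn(8, 5))
```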

13

Hu, Chengyang, Ke-Yue Zhang, Taiping Yao, Shice Liu, Shouhong Ding, Xin Tan, and Lizhuang Ma. "Domain-Hallucinated Updating for Multi-Domain Face Anti-spoofing". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2193–201. http://dx.doi.org/10.1609/aaai.v38i3.27992.

Abstract
Multi-Domain Face Anti-Spoofing (MD-FAS) is a practical setting that aims to update models on new domains using only novel data while ensuring that the knowledge acquired from previous domains is not forgotten. Prior methods utilize the responses from models to represent the previous domain knowledge or map the different domains into separated feature spaces to prevent forgetting. However, due to domain gaps, the responses of new data are not as accurate as those of previous data. Also, without the supervision of previous data, separated feature spaces might be destroyed by new domains while updating, leading to catastrophic forgetting. Inspired by the challenges posed by the lack of previous data, we solve this issue from a new standpoint that generates hallucinated previous data for updating FAS model. To this end, we propose a novel Domain-Hallucinated Updating (DHU) framework to facilitate the hallucination of data. Specifically, Domain Information Explorer learns representative domain information of the previous domains. Then, Domain Information Hallucination module transfers the new domain data to pseudo-previous domain ones. Moreover, Hallucinated Features Joint Learning module is proposed to asymmetrically align the new and pseudo-previous data for real samples via dual levels to learn more generalized features, promoting the results on all domains. Our experimental results and visualizations demonstrate that the proposed method outperforms state-of-the-art competitors in terms of effectiveness.

14

Zhang, Chen, Luis Fernando D'Haro, Thomas Friedrichs, and Haizhou Li. "MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11657–66. http://dx.doi.org/10.1609/aaai.v36i10.21420.

Abstract
Chatbots are designed to carry out human-like conversations across different domains, such as general chit-chat, knowledge exchange, and persona-grounded conversations. To measure the quality of such conversational agents, a dialogue evaluator is expected to conduct assessment across domains as well. However, most of the state-of-the-art automatic dialogue evaluation metrics (ADMs) are not designed for multi-domain evaluation. We are motivated to design a general and robust framework, MDD-Eval, to address the problem. Specifically, we first train a teacher evaluator with human-annotated data to acquire a rating skill to tell good dialogue responses from bad ones in a particular domain and then, adopt a self-training strategy to train a new evaluator with teacher-annotated multi-domain data, that helps the new evaluator to generalize across multiple domains. MDD-Eval is extensively assessed on six dialogue evaluation benchmarks. Empirical results show that the MDD-Eval framework achieves a strong performance with an absolute improvement of 7% over the state-of-the-art ADMs in terms of mean Spearman correlation scores across all the evaluation benchmarks.

15

Martin, Tina, Konstantin Titov, Andrey Tarasov, and Andreas Weller. "Spectral induced polarization: frequency domain versus time domain laboratory data". Geophysical Journal International 225, no. 3 (February 19, 2021): 1982–2000. http://dx.doi.org/10.1093/gji/ggab071.

Abstract
Spectral information obtained from induced polarization (IP) measurements can be used in a variety of applications and is often gathered in frequency domain (FD) at the laboratory scale. In contrast, field IP measurements are mostly done in time domain (TD). Theoretically, the spectral content from both domains should be similar. In practice, they are often different, mainly due to instrumental restrictions as well as the limited time and frequency range of measurements. Therefore, a possibility of transition between both domains, in particular for the comparison of laboratory FD IP data and field TD IP results, would be very favourable. To compare both domains, we conducted laboratory IP experiments in both TD and FD. We started with three numerical models and measurements at a test circuit, followed by several investigations for different wood and sandstone samples. Our results demonstrate that the differential polarizability (DP), which is calculated from the TD decay curves, can be compared very well with the phase of the complex electrical resistivity. Thus, DP can be used for a first visual comparison of FD and TD data, which also enables a fast discrimination between different samples. Furthermore, to compare both domains qualitatively, we calculated the relaxation time distribution (RTD) for all data. The results are mostly in agreement between both domains, however, depending on the TD data quality. It is striking that the DP and RTD results are in better agreement for higher data quality in TD. Nevertheless, we demonstrate that IP laboratory measurements can be carried out in both TD and FD with almost equivalent results. The RTD enables a good comparability of FD IP laboratory data with TD IP field data.

16

Feng, Lingyun, Minghui Qiu, Yaliang Li, Hai-Tao Zheng, and Ying Shen. "Learning to Augment for Data-scarce Domain BERT Knowledge Distillation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7422–30. http://dx.doi.org/10.1609/aaai.v35i8.16910.

Abstract
Although pre-trained language models such as BERT have achieved appealing performance in a wide range of Natural Language Processing (NLP) tasks, they are computationally expensive to deploy in real-time applications. A typical method is to adopt knowledge distillation to compress these large pre-trained models (teacher models) into small student models. However, for a target domain with scarce training data, the teacher can hardly pass useful knowledge to the student, which yields performance degradation for the student models. To tackle this problem, we propose a method to learn to augment data for BERT knowledge distillation in target domains with scarce labeled data, by learning a cross-domain manipulation scheme that automatically augments the target domain with the help of resource-rich source domains. Specifically, the proposed method generates samples acquired from a stationary distribution near the target data and adopts a reinforced controller to automatically refine the augmentation strategy according to the performance of the student. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art baselines on different NLP tasks, and for the data-scarce domains, the compressed student models even perform better than the original large teacher model, with much fewer parameters (only ~13.3%) when only a few labeled examples are available.
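
Although the paper's contribution is the learned augmentation controller, the underlying teacher-student objective is the standard distillation loss; a common formulation, with illustrative temperature and weighting, is sketched below.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.7):
    """Soft-label KL term (scaled by T^2) plus hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```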

17

Rudikova, L. V., and E. V. Zhavnerko. "ABOUT DATA MODELING SUBJECT DOMAINS PRACTICE-ORIENTED DIRECTION FOR UNIVERSAL SYSTEM OF STORAGE AND PROCESSING DATA". System Analysis and Applied Information Science, no. 3 (November 2, 2017): 4–12. http://dx.doi.org/10.21122/2309-4923-2017-3-4-12.

Abstract
This article describes data modeling for practice-oriented subject domains as the basis of a general data model for building a data warehouse. It briefly characterizes the subject domains and their relationship to different types of human activity at the current time. Appropriate data models are proposed, and the relationships between them are considered for data processing and data warehouse creation. The warehouse can be built on information storage technology and has the following characteristics: an extensible, complex subject domain; integration of data obtained from any data sources; time-invariant data with the required temporal marks; relatively high data stability; suitable compromises in data redundancy; modular system blocks; a flexible and extensible architecture; and high requirements for data storage security. A general approach to data collection and storage is proposed; the corresponding data models will later be integrated into one database schema to create a generalized data warehouse schema of the "constellation of facts" type. Structural methodology is applied to obtain the data models, and general principles of conceptual design are considered. A complex system that can work with multiple information sources and present data in a view convenient for users will be in demand for analyzing data from the selected subject domains and determining possible relationships.

18

Cruz, I. F., and A. Rajendran. "Semantic data integration in hierarchical domains". IEEE Intelligent Systems 18, no. 2 (March 2003): 66–73. http://dx.doi.org/10.1109/mis.2003.1193659.

19

Sun, Ke, Hong Liu, Qixiang Ye, Yue Gao, Jianzhuang Liu, Ling Shao, and Rongrong Ji. "Domain General Face Forgery Detection by Learning to Weight". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2638–46. http://dx.doi.org/10.1609/aaai.v35i3.16367.

Abstract
In this paper, we propose a domain-general model, termed learning-to-weight (LTW), that guarantees face detection performance across multiple domains, particularly the target domains that are never seen before. However, various face forgery methods cause complex and biased data distributions, making it challenging to detect fake faces in unseen domains. We argue that different faces contribute differently to a detection model trained on multiple domains, making the model likely to fit domain-specific biases. As such, we propose the LTW approach based on the meta-weight learning algorithm, which configures different weights for face images from different domains. The LTW network can balance the model's generalizability across multiple domains. Then, the meta-optimization calibrates the source domain's gradient, enabling more discriminative features to be learned. The detection ability of the network is further improved by introducing an intra-class compact loss. Extensive experiments on several commonly used deepfake datasets demonstrate the effectiveness of our method in detecting synthetic faces. Code and supplemental material are available at https://github.com/skJack/LTW.

20

Wofford, Haley A., Josh Myers-Dean, Brandon A. Vogel, Kevin Alexander Estrada Alamo, Frederick A. Longshore-Neate, Filip Jagodzinski, and Jeanine F. Amacher. "Domain Analysis and Motif Matcher (DAMM): A Program to Predict Selectivity Determinants in Monosiga brevicollis PDZ Domains Using Human PDZ Data". Molecules 26, no. 19 (October 5, 2021): 6034. http://dx.doi.org/10.3390/molecules26196034.

Abstract
Choanoflagellates are single-celled eukaryotes with complex signaling pathways. They are considered the closest non-metazoan ancestors to mammals and other metazoans and form multicellular-like states called rosettes. The choanoflagellate Monosiga brevicollis contains over 150 PDZ domains, an important peptide-binding domain in all three domains of life (Archaea, Bacteria, and Eukarya). Therefore, an understanding of PDZ domain signaling pathways in choanoflagellates may provide insight into the origins of multicellularity. PDZ domains recognize the C-terminus of target proteins and regulate signaling and trafficking pathways, as well as cellular adhesion. Here, we developed a computational software suite, Domain Analysis and Motif Matcher (DAMM), that analyzes peptide-binding cleft sequence identity as compared with human PDZ domains and that can be used in combination with literature searches of known human PDZ-interacting sequences to predict target specificity in choanoflagellate PDZ domains. We used this program, protein biochemistry, fluorescence polarization, and structural analyses to characterize the specificity of A9UPE9_MONBE, a M. brevicollis PDZ domain-containing protein with no homology to any metazoan protein, finding that its PDZ domain is most similar to those of the DLG family. We then identified two endogenous sequences that bind A9UPE9 PDZ with <100 μM affinity, a value commonly considered the threshold for cellular PDZ–peptide interactions. Taken together, this approach can be used to predict cellular targets of previously uncharacterized PDZ domains in choanoflagellates and other organisms. Our data contribute to investigations into choanoflagellate signaling and how it informs metazoan evolution.

21

Dai, Chang Ying, Wen Tao Gong, and Jing Liu. "Access Process of Data-Flow in Cross-Domain Usage Control Model Based on XACML". Advanced Materials Research 143-144 (October 2010): 1275–79. http://dx.doi.org/10.4028/www.scientific.net/amr.143-144.1275.

Abstract
With the rapid development of information technology, more and more requesters need to access services in different access domains, which makes the cross-domain access process more difficult. Traditional access control models cannot handle this access process because of their design limitations and the diversity of access policies. The usage control model (UCON) was proposed to strengthen the expressiveness of access control models, but UCON is only a conceptual model, and how to use it in the access process is worth further study. Extensible Access Control Markup Language (XACML) is an open-standard XML-based language that can be used to describe security policy. In order to support the access process across different access domains, an access process for data-flow in a cross-domain usage control model based on XACML is proposed in this paper. The access process of data-flow across different domains in XACML is introduced to solve the cross-domain problem. Finally, a small example is given to verify the effectiveness of the access process.
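
To illustrate the flow rather than the paper's actual policies (which would be expressed in XACML's XML policy language), here is a Python sketch that reduces a cross-domain policy to a dictionary and re-checks a UCON-style mutable usage attribute on every request; all attribute names and limits are invented.

```python
# One XACML-like policy per target domain; "max_usage_count" plays the role
# of a UCON mutable attribute that changes as access proceeds.
POLICIES = {
    "domainB": {"permit_roles": {"researcher"}, "max_usage_count": 3},
}

def evaluate(request, usage_state):
    policy = POLICIES[request["target_domain"]]
    if request["role"] not in policy["permit_roles"]:
        return "Deny"
    used = usage_state.get(request["subject"], 0)
    if used >= policy["max_usage_count"]:
        return "Deny"                      # usage quota exhausted
    usage_state[request["subject"]] = used + 1
    return "Permit"

state = {}
req = {"subject": "alice", "role": "researcher", "target_domain": "domainB"}
print([evaluate(req, state) for _ in range(4)])  # 3x Permit, then Deny
```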

22

Gantala, Thulsiram, and Krishnan Balasubramaniam. "Implementing Data-Driven Approach for Modelling Ultrasonic Wave Propagation Using Spatio-Temporal Deep Learning (SDL)". Applied Sciences 12, no. 12 (June 9, 2022): 5881. http://dx.doi.org/10.3390/app12125881.

Abstract
In this paper, we proposed a data-driven spatio-temporal deep learning (SDL) model, to simulate forward and reflected ultrasonic wave propagation in the 2D geometrical domain, by implementing the convolutional long short-term memory (ConvLSTM) algorithm. The SDL model learns underlying wave physics from the spatio-temporal datasets. Two different SDL models are trained, with the following time-domain finite element (FE) simulation datasets, by applying: (1) multi-point excitation sources inside the domain and (2) single-point excitation sources on the edge of the different geometrical domains. The proposed SDL models simulate ultrasonic wave dynamics, for the forward ultrasonic wave propagation in the different geometrical domains and reflected wave propagation phenomenon, from the geometrical boundaries such as curved, T-shaped, triangular, and rectangular domains, with varying frequencies and cycles. The SDL is a reliable model, which generates simulations faster than the conventional finite element solvers.
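
A minimal sketch of a spatio-temporal network of the kind described, using Keras's built-in ConvLSTM2D layer to map a sequence of 2D wavefield snapshots to the next snapshot; the layer counts, sizes, and input resolution are illustrative, not the paper's architecture.

```python
from tensorflow.keras import layers, models

def build_sdl_model(frames=10, h=64, w=64):
    """Input: `frames` consecutive field snapshots; output: next snapshot."""
    return models.Sequential([
        layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True,
                          input_shape=(frames, h, w, 1)),
        layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=False),
        layers.Conv2D(1, (3, 3), padding="same"),   # predicted next snapshot
    ])

model = build_sdl_model()
model.compile(optimizer="adam", loss="mse")  # trained against FE snapshots
model.summary()
```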

23

Farghaly, Karim, F. H. Abanda, Christos Vidalakis, and Graham Wood. "BIM-linked data integration for asset management". Built Environment Project and Asset Management 9, no. 4 (September 9, 2019): 489–502. http://dx.doi.org/10.1108/bepam-11-2018-0136.

Abstract
Purpose: The purpose of this paper is to investigate the transfer of information from building information modelling (BIM) models to either conventional or advanced asset management platforms using Linked Data. To achieve this aim, a process for generating Linked Data in the asset management context and its integration with BIM data is presented.
Design/methodology/approach: The research design employs a participatory action research (PAR) approach. The PAR approach utilized two qualitative data collection methods, namely focus groups and interviews, to identify and evaluate the required standards for the mapping of different domains. Prototyping, an approach from software development methodology, was also utilized to develop the ontologies and Linked Data.
Findings: The proposed process offers a comprehensive description of the required standards and classifications in the construction domain, related vocabularies and object-oriented links to ensure effective data integration between different domains. The proposed process also demonstrates the different stages, tools, best practices and guidelines for developing Linked Data, armed with a comprehensive use case of Linked Data generation about building assets that consume energy.
Originality/value: Linked Data generation and publication in the AECO domain is still in its infancy and needs methodological guidelines to support its evolution towards maturity in its processes and applications. This research concentrates on Linked Data applications with BIM to link across domains, where few studies have been conducted.

24

Evans, Richard, and Edward Grefenstette. "Learning Explanatory Rules from Noisy Data". Journal of Artificial Intelligence Research 61 (January 26, 2018): 1–64. http://dx.doi.org/10.1613/jair.5714.

Abstract
Artificial Neural Networks are powerful function approximators capable of modelling solutions to a wide variety of problems, both supervised and unsupervised. As their size and expressivity increases, so too does the variance of the model, yielding a nearly ubiquitous overfitting problem. Although mitigated by a variety of model regularisation methods, the common cure is to seek large amounts of training data--which is not necessarily easily obtained--that sufficiently approximates the data distribution of the domain we wish to test on. In contrast, logic programming methods such as Inductive Logic Programming offer an extremely data-efficient process by which models can be trained to reason on symbolic domains. However, these methods are unable to deal with the variety of domains neural networks can be applied to: they are not robust to noise in or mislabelling of inputs, and perhaps more importantly, cannot be applied to non-symbolic domains where the data is ambiguous, such as operating on raw pixels. In this paper, we propose a Differentiable Inductive Logic framework, which can not only solve tasks which traditional ILP systems are suited for, but shows a robustness to noise and error in the training data which ILP cannot cope with. Furthermore, as it is trained by backpropagation against a likelihood objective, it can be hybridised by connecting it with neural networks over ambiguous data in order to be applied to domains which ILP cannot address, while providing data efficiency and generalisation beyond what neural networks on their own can achieve.

25

Zhuo, Hankz Hankui, Qiang Yang, Rong Pan, and Lei Li. "Cross-Domain Action-Model Acquisition for Planning via Web Search". Proceedings of the International Conference on Automated Planning and Scheduling 21 (March 22, 2011): 298–305. http://dx.doi.org/10.1609/icaps.v21i1.13449.

Abstract
Applying learning techniques to acquire action models is an area of intense research interest. Most previous works in this area have assumed that there is a significant amount of training data available in a planning domain of interest, which we call the target domain, where action models are to be learned. However, it is often difficult to acquire sufficient training data to ensure that the learned action models are of high quality. In this paper, we develop a novel approach to learning action models with limited training data in the target domain by transferring knowledge from related auxiliary or source domains. We assume that the action models in the source domains have already been created before, and seek to transfer as much of the available information from the source domains as possible to help our learning task. We first exploit a Web searching method to bridge the target and source domains, such that transferrable knowledge from source domains is identified. We then encode the transferred knowledge together with the available data from the target domain as constraints in a maximum satisfiability problem, and solve these constraints using a weighted MAX-SAT solver. We finally transform the solutions thus obtained into high-quality target-domain action models. We empirically show that our transfer-learning based framework is effective in several domains, including the International Planning Competition (IPC) domains and some synthetic domains.

26

Arora, Preeti, Deepali Virmani, and P. S. Kulkarni. "An Approach for Big Data to Evolve the Auspicious Information from Cross-Domains". International Journal of Electrical and Computer Engineering (IJECE) 7, no. 2 (April 1, 2017): 967. http://dx.doi.org/10.11591/ijece.v7i2.pp967-974.

Abstract
Sentiment analysis is the pre-eminent technology to extract the relevant information from the data domain. In this paper the cross-domain sentimental classification approach Cross_BOMEST is proposed. The proposed approach extracts +ve (positive) words using the existing BOMEST technique; with the help of MS Word Interop, Cross_BOMEST determines +ve words and replaces all their synonyms to escalate the polarity, then blends two different domains and detects all the self-sufficient words. The proposed algorithm is executed on Amazon datasets, where two different domains are trained to analyze sentiments of the reviews of the remaining domain. The proposed approach contributes propitious results in the cross-domain analysis, and an accuracy of 92% is obtained. Precision and recall of BOMEST are improved by 16% and 7%, respectively, by Cross_BOMEST.

27

Sydorov, N. O., and N. M. Sydorova. "Software engineering and big data software". PROBLEMS IN PROGRAMMING, no. 3-4 (December 2022): 69–72. http://dx.doi.org/10.15407/pp2022.03-04.069.

Abstract
Software engineering is a mature industry of human activity focused on the creation, deployment, marketing and maintenance of software. The fundamental concepts of the engineering are the life cycle model; the three main components of the life cycle phases (products, processes and resources); and methodologies for creating, deploying and maintaining software. Software is the foundation of technological advances that lead to new high-performance products. As the functionality of products grows, so does the need to efficiently and correctly create and maintain the complex software that enables this growth. Therefore, in addition to solving its own problems, software engineering serves the creation and maintenance of software in other domains, which are called application domains. Information technology is a well-known application domain, and data are the basis of this domain. Information systems are implemented in an organization to improve its effectiveness and efficiency. The functionality of information systems has grown dramatically since big data began to be used. This growth has led to the emergence of a wide variety of software-intensive big data information systems. At the same time, the role and importance of software engineering for solving the problems of this application domain have only intensified. Modern capabilities of software engineering are shown, the aspects of interaction between software engineering and big data systems are analyzed, and topics for the study of big data software ecosystems and big data systems of systems are outlined.

28

Boroh, A. W., K. Y. Sore-Gamo, Ngounouno Ayiwouo, Mbowou Gbambie, and I. Ngounouno. "Implication of geological domains data for modeling and estimating resources from Nkout iron deposit (South-Cameroun)". Journal of Mining and Metallurgy A: Mining 57, no. 1 (2021): 1–17. http://dx.doi.org/10.5937/jmma2101001b.

Abstract
This paper is devoted to determining whether the addition of geological information can improve the estimation of mineral resources. The geochemical data used come from 116 drill holes in the Nkout East iron deposit in southern Cameroon. These geochemical data are modeled in the Surpac and Isatis software packages to represent the 3D geochemical distribution of iron in the deposit. Statistical analysis and then a variographic study are performed to study the spatial variability of iron. Estimation domains were defined based on the results of geological and geochemical analyses. Four domains were determined: the saprolitic domain; the poor domain (fresh rocks such as amphibolites, granites, and gneisses); the rich domain (oxidized rocks, BIF); and the metasediment domain. Block modeling of the deposit is performed to estimate the resource. The grade of each block was estimated by using ordinary kriging and composites from each domain. This study also compared two types of estimate, namely the domain estimate and the global estimate. Cross-validation made it possible to validate the obtained models. From this comparison, the domain estimate is more precise than the global estimate in terms of error analysis, while, taking into account the point clouds of predicted and estimated values, estimation by geochemical modeling provides the best results.
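
Ordinary kriging, the estimator named in the abstract, reduces to solving one small linear system per block, built from a variogram; the numpy sketch below uses an exponential variogram with illustrative sill and range values.

```python
import numpy as np

def variogram(h, sill=1.0, rng=100.0):
    """Exponential variogram; sill and range are illustrative."""
    return sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(coords, values, target):
    """coords: (n, 2) sample locations; values: (n,) grades; target: (2,)."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)               # gamma(0) = 0 on the diagonal
    A[n, n] = 0.0                          # Lagrange multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=-1))
    w = np.linalg.solve(A, b)[:n]          # kriging weights (sum to 1)
    return float(w @ values)

coords = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])
grades = np.array([30.0, 35.0, 32.0])      # e.g., Fe % composites
print(ordinary_kriging(coords, grades, np.array([20.0, 20.0])))
```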

29

Hu, Xuming, Zhaochen Hong, Yong Jiang, Zhichao Lin, Xiaobin Wang, Pengjun Xie, and Philip S. Yu. "Three Heads Are Better than One: Improving Cross-Domain NER with Progressive Decomposed Network". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 18261–69. http://dx.doi.org/10.1609/aaai.v38i16.29785.

Abstract
Cross-domain named entity recognition (NER) tasks encourage NER models to transfer knowledge from data-rich source domains to sparsely labeled target domains. Previous works adopt the paradigms of pre-training on the source domain followed by fine-tuning on the target domain. However, these works ignore that general labeled NER source domain data can be easily retrieved in the real world, and soliciting more source domains could bring more benefits. Unfortunately, previous paradigms cannot efficiently transfer knowledge from multiple source domains. In this work, to transfer multiple source domains' knowledge, we decouple the NER task into the pipeline tasks of mention detection and entity typing, where the mention detection unifies the training object across domains, thus providing the entity typing with higher-quality entity mentions. Additionally, we request multiple general source domain models to suggest the potential named entities for sentences in the target domain explicitly, and transfer their knowledge to the target domain models through the knowledge progressive networks implicitly. Furthermore, we propose two methods to analyze in which source domain knowledge transfer occurs, thus helping us judge which source domain brings the greatest benefit. In our experiment, we develop a Chinese cross-domain NER dataset. Our model improved the F1 score by an average of 12.50% across 8 Chinese and English datasets compared to models without source domain data.
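
The decoupling is easy to picture as a two-stage pipeline: a domain-agnostic mention detector proposes spans, and a separate entity-typing step labels them. The stubs below are illustrative toys (capitalization heuristic, tiny gazetteer), not the paper's progressive decomposed network.

```python
from typing import List, Tuple

def detect_mentions(tokens: List[str]) -> List[Tuple[int, int]]:
    """Stage 1, shared across source domains: propose candidate spans
    (stub heuristic: maximal runs of capitalized tokens)."""
    spans, start = [], None
    for i, tok in enumerate(tokens + [""]):
        if tok[:1].isupper():
            start = i if start is None else start
        elif start is not None:
            spans.append((start, i))
            start = None
    return spans

def type_entity(tokens: List[str], span: Tuple[int, int]) -> str:
    """Stage 2, target-domain specific: assign a type to a proposed span
    (stub: trivial gazetteer lookup)."""
    text = " ".join(tokens[span[0]:span[1]])
    return {"London": "LOC", "Acme Corp": "ORG"}.get(text, "MISC")

tokens = "Acme Corp opened an office in London".split()
for span in detect_mentions(tokens):
    print(span, type_entity(tokens, span))   # (0, 2) ORG / (6, 7) LOC
```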

30

Huang, Hong. "Domain knowledge and data quality perceptions in genome curation work". Journal of Documentation 71, no. 1 (January 12, 2015): 116–42. http://dx.doi.org/10.1108/jd-08-2013-0104.

Abstract
Purpose – The purpose of this paper is to understand genomics scientists’ perceptions in data quality assurances based on their domain knowledge. Design/methodology/approach – The study used a survey method to collect responses from 149 genomics scientists grouped by domain knowledge. They ranked the top-five quality criteria based on hypothetical curation scenarios. The results were compared using χ2 test. Findings – Scientists with domain knowledge of biology, bioinformatics, and computational science did not reach a consensus in ranking data quality criteria. Findings showed that biologists cared more about curated data that can be concise and traceable. They were also concerned about skills dealing with information overloading. Computational scientists on the other hand value making curation understandable. They paid more attention to the specific skills for data wrangling. Originality/value – This study takes a new approach in comparing the data quality perceptions for scientists across different domains of knowledge. Few studies have been able to synthesize models to interpret data quality perception across domains. The findings may help develop data quality assurance policies, training seminars, and maximize the efficiency of genome data management.

31

Yesin, V. I. "Expressive means of the «object-event» data model". Radiotekhnika, no. 191 (December 22, 2017): 99–112. http://dx.doi.org/10.30837/rt.2017.4.191.09.

Abstract
The relevance and importance of the problem of data representation in subject domain modeling are shown. Expressive means (conceptual modeling languages) are developed to represent conceptual models of subject domains in graphic form; they are based on the "object-event" data model and are its constituent elements. Recommendations on their use are given.

32

Wu, Yuan, and Yuhong Guo. "Dual Adversarial Co-Learning for Multi-Domain Text Classification". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6438–45. http://dx.doi.org/10.1609/aaai.v34i04.6115.

Abstract
With the advent of deep learning, the performance of text classification models has improved significantly. Nevertheless, the successful training of a good classification model requires a sufficient amount of labeled data, while it is always expensive and time consuming to annotate data. With the rapid growth of digital data, similar classification tasks can typically occur in multiple domains, while the availability of labeled data can largely vary across domains. Some domains may have abundant labeled data, while in some other domains there may only exist a limited amount (or none) of labeled data. Meanwhile, text classification tasks are highly domain-dependent: a text classifier trained in one domain may not perform well in another domain. In order to address these issues, in this paper we propose a novel dual adversarial co-learning approach for multi-domain text classification (MDTC). The approach learns shared-private networks for feature extraction and deploys dual adversarial regularizations to align features across different domains and between labeled and unlabeled data simultaneously under a discrepancy-based co-learning framework, aiming to improve the classifiers' generalization capacity with the learned features. We conduct experiments on multi-domain sentiment classification datasets. The results show the proposed approach achieves the state-of-the-art MDTC performance.
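
A minimal sketch of the shared-private split with adversarial feature alignment via a gradient-reversal layer, a standard building block for this family of methods; the paper's dual adversarial co-learning objective is more involved, and all layer sizes here are illustrative.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity forward; flips (and scales) the gradient on the way back."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SharedPrivateClassifier(nn.Module):
    def __init__(self, dim, n_domains, n_classes):
        super().__init__()
        self.shared = nn.Linear(dim, 64)                   # domain-invariant
        self.private = nn.ModuleList(
            nn.Linear(dim, 64) for _ in range(n_domains))  # domain-specific
        self.clf = nn.Linear(128, n_classes)
        self.domain_disc = nn.Linear(64, n_domains)        # adversary

    def forward(self, x, domain_id, lambd=1.0):
        s = torch.relu(self.shared(x))
        p = torch.relu(self.private[domain_id](x))
        class_logits = self.clf(torch.cat([s, p], dim=-1))
        # The discriminator tries to identify the domain from shared features;
        # reversed gradients push those features to be domain-invariant.
        domain_logits = self.domain_disc(GradReverse.apply(s, lambd))
        return class_logits, domain_logits
```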

33

Lavbic, Dejan, Iztok Lajovic, and Marjan Krisper. "Facilitating information system development with panoramic view on data". Computer Science and Information Systems 7, no. 4 (2010): 737–67. http://dx.doi.org/10.2298/csis091122031l.

Abstract
The increasing amount of information and the absence of an effective tool for assisting users with minimal technical knowledge led us to use the associative thinking paradigm for the implementation of a software solution, Panorama. In this study, we present an object recognition process, based on context + focus information visualization techniques, as a foundation for the realization of Panorama. We show that a user can easily define the data vocabulary of a selected domain, which is furthermore used as the application framework. The purpose of the Panorama approach is to facilitate software development for certain problem domains by shortening the Software Development Life Cycle, minimizing the impact of the implementation, review and maintenance phases. Our approach is focused on using and updating the data vocabulary by users without extensive programming skills. Panorama therefore facilitates traversing data by following associations, where the user does not need to be familiar with the query language or the data structure and does not need to know the problem domain fully. Our approach has been verified by detailed comparison to existing approaches and in an experiment implementing selected use cases. The results confirmed that Panorama fits problem domains with an emphasis on data-oriented rather than process-oriented aspects. In such cases the development of selected problem domains is shortened by up to 25%, with the emphasis mainly on analysis, logical design and testing, while omitting physical design and programming, which are performed automatically by the Panorama tool.

34

Hargreaves, Shila Minari, Eduardo Yoshio Nakano, Heesup Han, António Raposo, Antonio Ariza-Montes, Alejandro Vega-Muñoz, and Renata Puppin Zandonadi. "Quality of Life of Brazilian Vegetarians Measured by the WHOQOL-BREF: Influence of Type of Diet, Motivation and Sociodemographic Data". Nutrients 13, no. 8 (July 30, 2021): 2648. http://dx.doi.org/10.3390/nu13082648.

Abstract
This study aimed to evaluate the general quality of life (QoL) of Brazilian vegetarians. A cross-sectional study was conducted with Brazilian vegetarian adults (18 years old and above). Individuals were recruited to participate in a nationwide online survey that comprised the WHOQOL-BREF as well as sociodemographic and characterization questions related to vegetarianism. The WHOQOL-BREF is composed of 24 items which are divided into four domains (domain 1: physical health; domain 2: psychological well-being; domain 3: social relationships; and domain 4: environment), plus two general items which were analyzed separately, totaling 26 items. The answers from the questionnaire were converted into scores with a 0–100 scale range, with separate analyses for each domain. Results were compared among groups based on the different characteristics of the vegetarian population. A total of 4375 individuals completed the survey. General average score results were 74.67 (domain 1), 66.71 (domain 2), 63.66 (domain 3) and 65.76 (domain 4). Vegans showed better scores when compared to the other vegetarians, except in domain four, where the statistical difference was observed only for semi-vegetarians (lower score). Individuals adopting a vegetarian diet for longer (>1 year) showed better results for domains one and two, with no difference for the other domains. Having close people also adopting a vegetarian diet positively influenced the results for all domains. On the other hand, it was not possible to distinguish any clear influence of the motivation for adopting a vegetarian diet on the scores’ results. Adopting a vegetarian diet does not have detrimental effects on one’s QoL. In fact, the more plant-based the diet, and the longer it was adopted, the better the results were.
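
The 0-100 conversion mentioned above follows the published WHOQOL-BREF scoring convention: a domain score is the mean of its 1-5 items multiplied by 4 (giving a 4-20 raw score), then mapped linearly onto 0-100. A small sketch, with the per-domain item lists omitted:

```python
def whoqol_domain_score(item_scores):
    """item_scores: list of 1-5 answers for one domain's items."""
    raw = 4.0 * sum(item_scores) / len(item_scores)   # 4-20 raw scale
    return (raw - 4.0) / 16.0 * 100.0                 # 0-100 transformed scale

# e.g., seven physical-health items answered mostly 4 out of 5:
print(whoqol_domain_score([4, 4, 3, 4, 5, 4, 4]))     # 75.0
```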

35

Xu, Yifan, Kekai Sheng, Weiming Dong, Baoyuan Wu, Changsheng Xu, and Bao-Gang Hu. "Towards Corruption-Agnostic Robust Domain Adaptation". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (November 30, 2022): 1–16. http://dx.doi.org/10.1145/3501800.

Full text
Abstract
Great progress has been achieved in domain adaptation over recent decades. Existing works typically rest on the ideal assumption that testing target domains are independent and identically distributed with training target domains. However, due to unpredictable corruptions (e.g., noise and blur) in real data, such as web images and real-world object detection, domain adaptation methods are increasingly required to be corruption-robust on target domains. We investigate a new task, corruption-agnostic robust domain adaptation (CRDA), whose goal is to be accurate on original data and robust against corruptions on target domains that are unavailable at training time. This task is non-trivial due to the large domain discrepancy and the unsupervised target domains. We observe that simple combinations of popular domain adaptation and corruption robustness methods yield suboptimal CRDA results. We propose a new approach based on two technical insights into CRDA: (1) an easy-to-plug module called the domain discrepancy generator (DDG), which generates samples that enlarge domain discrepancy to mimic unpredictable corruptions; (2) a simple but effective teacher-student scheme with a contrastive loss to strengthen the constraints on target domains. Experiments verify that our approach maintains or even improves performance on original data and achieves better corruption robustness than the baselines. Our code is available at: https://github.com/YifanXu74/CRDA.
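The teacher-student constraint described in insight (2) can be sketched roughly as follows; the EMA teacher update and the InfoNCE-style contrastive form are common choices assumed here, not necessarily the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Track the student with an exponential moving average teacher."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def contrastive_consistency(z_student, z_teacher, temperature=0.1):
    """Pull each student embedding of a target sample toward the teacher
    embedding of the same sample; push it away from other batch samples."""
    zs = F.normalize(z_student, dim=1)
    zt = F.normalize(z_teacher, dim=1)
    logits = zs @ zt.t() / temperature        # (B, B) cosine similarities
    labels = torch.arange(zs.size(0), device=zs.device)  # diagonal positives
    return F.cross_entropy(logits, labels)
```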
36

Baxter, Rolf H., Neil M. Robertson and David M. Lane. "Human behaviour recognition in data-scarce domains". Pattern Recognition 48, no. 8 (August 2015): 2377–93. http://dx.doi.org/10.1016/j.patcog.2015.02.019.

Full text
37

Sgheri, Luca. "Joining RDC data from flexible protein domains". Inverse Problems 26, no. 11 (October 15, 2010): 115021. http://dx.doi.org/10.1088/0266-5611/26/11/115021.

Full text
38

Falcão, Rodrigo, Raghad Matar, Bernd Rauch, Frank Elberzhager and Matthias Koch. "A Reference Architecture for Enabling Interoperability and Data Sovereignty in the Agricultural Data Space". Information 14, no. 3 (March 21, 2023): 197. http://dx.doi.org/10.3390/info14030197.

Full text
Abstract
Agriculture is one of the major sectors of the global economy and also a software-intensive domain. The digital landscape of agriculture is composed of multiple digital ecosystems, which together constitute an agricultural domain ecosystem, also referred to as the "Agricultural Data Space" (ADS). Because the domain is so large, there are several sub-domains and specialized solutions, each of which poses challenges to interoperability. Additionally, farmers have increasing concerns about data sovereignty. In the context of the research project COGNAC, we elicited architecture drivers for interoperability and data sovereignty in agriculture and designed a reference architecture for a platform that aims to address these qualities in the ADS. In this paper, we present the solution concepts and design decisions that characterize the reference architecture. Early prototypes have been developed and made available to support the validation of the concept.
39

Abhishek, Kumar, M. P. Singh, Deepika Shukla and Sachin Gupta. "OBDMR: An Ontology-Based Data Model for Railways". Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 273–83. http://dx.doi.org/10.1166/jctn.2020.8662.

Full text
Abstract
The domain of the railway system is vast and complex, since it includes a hierarchy of several sub-domains spanning different branches of technology and operational structure. A great deal of research has been and is being conducted in this vast domain across different technologies. Among the available technologies, ontology is the only one that addresses semantics and thus supports decision support systems. This paper proposes the OBDMR model for railway systems to integrate information at the knowledge level. The paper uses railML (version 2.2) as a data resource, as railML covers all aspects of the railway system. railML (Railway Mark-up Language) is an open, XML-based data exchange format for data interoperability of railway applications. The proposed ontology adds semantics to the given data and allows new information to be inferred from current data, which XML cannot do. OBDMR is capable of taking decisions through automated reasoning using software agents. The generic model proposed in this paper satisfies the standards and specifications of most countries' railway systems. A use case for Indian Railways is discussed with some examples.
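The lifting of XML data into an ontology that the abstract describes can be illustrated with a minimal sketch; the <track> element, namespace and property names below are hypothetical, not the railML or OBDMR vocabulary:

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, RDF

RAIL = Namespace("http://example.org/rail#")   # hypothetical namespace
xml = '<track id="t1"><length>1200</length></track>'

elem = ET.fromstring(xml)
g = Graph()
track = RAIL[elem.get("id")]
g.add((track, RDF.type, RAIL.Track))           # typed individual
g.add((track, RAIL.lengthInMetres,
       Literal(int(elem.find("length").text))))
print(g.serialize(format="turtle"))            # triples a reasoner can use
```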
40

Zhang, Fan, Xin Peng, Liang Huang, Man Zhu, Yuanqiao Wen and Haitao Zheng. "A Spatiotemporal Statistical Method of Ship Domain in the Inland Waters Driven by Trajectory Data". Journal of Marine Science and Engineering 9, no. 4 (April 12, 2021): 410. http://dx.doi.org/10.3390/jmse9040410.

Full text
Abstract
In this study, a method for dynamically establishing ship domains in inland waters is proposed to help make decisions about ship collision avoidance. The waters surrounding the target ship are divided into grids, and the grid densities of ships at each moment are then calculated to determine the shape and size of the ship domain for different types of ships. Finally, based on the spatiotemporal statistical method, the characteristics of the ship domains of different ship types in different navigational environments were analyzed. The proposed method is applied to establish ship domains for different types of ships in the Wuhan section of the Yangtze River in January, February, July, and August 2014. The results show that, in each month, the size of the ship domain increases as the ship size increases. The domain size is significantly influenced by the water level, and in inland waters the ship domain is larger in dry seasons than in wet seasons.
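A minimal sketch of the grid-density step, under our own simplifying assumptions (projected coordinates in metres, no rotation into the target ship's heading frame, illustrative grid extent):

```python
import numpy as np

def relative_density(target_xy, neighbours_xy, extent=2000.0, cells=50):
    """target_xy: (T, 2) target-ship track; neighbours_xy: list of (N_t, 2)
    neighbour positions per time step, both in projected metres. Returns the
    mean count of neighbours per grid cell around the target; a rotation
    into the target's heading frame is omitted for brevity."""
    edges = np.linspace(-extent, extent, cells + 1)
    grid = np.zeros((cells, cells))
    for t, others in enumerate(neighbours_xy):
        rel = others - target_xy[t]                       # relative positions
        h, _, _ = np.histogram2d(rel[:, 0], rel[:, 1], bins=(edges, edges))
        grid += h
    return grid / max(len(neighbours_xy), 1)
```

Low-density cells near the origin, aggregated over many snapshots, outline the region other ships tend to avoid, i.e. the empirical ship domain.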
41

Wittich, D. and F. Rottensteiner. "ADVERSARIAL DOMAIN ADAPTATION FOR THE CLASSIFICATION OF AERIAL IMAGES AND HEIGHT DATA USING CONVOLUTIONAL NEURAL NETWORKS". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W7 (September 16, 2019): 197–204. http://dx.doi.org/10.5194/isprs-annals-iv-2-w7-197-2019.

Full text
Abstract
Domain adaptation (DA) can drastically decrease the amount of training data needed to obtain good classification models by leveraging available data from a source domain for the classification of a new (target) domain. In this paper, we address deep DA, i.e. DA with deep convolutional neural networks (CNN), a problem that has not been addressed frequently in remote sensing. We present a new method for semi-supervised DA for the task of pixel-based classification by a CNN. After proposing an encoder-decoder-based fully convolutional neural network (FCN), we adapt a method for adversarial discriminative DA to make it applicable to the pixel-based classification of remotely sensed data based on this network. It tries to learn a feature representation that is domain invariant; domain invariance is measured by a classifier's incapability of predicting from which domain a sample was generated. We evaluate our FCN on the ISPRS labelling challenge, showing that it is close to the best-performing models. DA is evaluated on the basis of three domains. We compare different network configurations and perform the representation transfer at different layers of the network. We show that, when a proper layer is used for adaptation, our method achieves a positive transfer and thus an improved classification accuracy in the target domain for all evaluated combinations of source and target domains.
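The adversarial discriminative scheme the authors adapt can be sketched in its generic (ADDA-style) form; the discriminator interface and binary source/target labelling below are standard assumptions, not the paper's exact pixel-wise formulation:

```python
import torch
import torch.nn.functional as F

def discriminator_step(disc, f_src, f_tgt):
    """Train the domain discriminator to label source features 1, target 0."""
    logit_s = disc(f_src.detach())
    logit_t = disc(f_tgt.detach())
    return (F.binary_cross_entropy_with_logits(logit_s, torch.ones_like(logit_s))
            + F.binary_cross_entropy_with_logits(logit_t, torch.zeros_like(logit_t)))

def encoder_step(disc, f_tgt):
    """Train the target encoder to fool the discriminator (inverted labels):
    a domain-invariant representation is one the discriminator cannot place."""
    logit_t = disc(f_tgt)                     # gradients reach the encoder
    return F.binary_cross_entropy_with_logits(logit_t, torch.ones_like(logit_t))
```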
42

Dau, Hoan Manh, Ning Xu and Tung Khac Truong. "A Survey of Using Weakly Supervised and Semi-Supervised for Cross-Domain Sentiment Classification". Advanced Materials Research 905 (April 2014): 637–41. http://dx.doi.org/10.4028/www.scientific.net/amr.905.637.

Full text
Abstract
Supervised machine learning techniques can analyze sentiment very effectively, but they need a large corpus of training data, and in many languages there are few appropriate data for training sentiment classifiers. In this paper, weakly supervised techniques that use a large collection of unlabeled text to determine sentiment are presented. The performance of this method may depend less on the domain, topic and time period represented by the testing data. In addition, semi-supervised classification using a sentiment-sensitive thesaurus is discussed. It is applicable when no labeled data are available for a target domain but some labeled data exist for multiple other domains, designated as the source domains. This method can learn efficiently from multiple source domains. The results show that weakly supervised techniques are suitable for applications requiring sentiment classification across several domains, and that semi-supervised techniques can learn efficiently from multiple source domains.
43

Wiederhold, G. "Objects and Domains for Managing Medical Data and Knowledge". Methods of Information in Medicine 34, no. 01/02 (1995): 40–46. http://dx.doi.org/10.1055/s-0038-1634583.

Full text
Abstract
This paper assesses the object-oriented data paradigm, and describes an algebraic approach which permits the generation of data objects from relational data, based on the knowledge captured in a formal Entity-Relationship model, the Structural Model. The advantage is that now objects can be created that satisfy a variety of particular views, as long as the hierarchies represented by the views are subsumed in the network represented by the overall structural model. The disadvantage of creating view-objects dynamically is that the additional layering has performance implications, so that the speedup expected from object-oriented databases versus relational databases, due to their hierarchical object storage, cannot be realized. However, scalability of systems is increased since large systems tend to have multiple objectives, and hence often multiple valid hierarchical views over the data. This approach has been implemented in the Penguin project, and recently some commercial successors are emerging. In truly large systems new problems arise, namely that now not only multiple views will exist, but also that the domains to be covered by the data will be autonomous and hence heterogeneous. One result is that ontologies associated with the multiple domains will differ as well. This paper proposes a knowledge-based algebra over the ontologies, so that the domain knowledge can be partitioned for maintenance. Only the articulation points, where the domains intersect, have to be agreed upon as defined by matching rules which define the shared ontologies.
44

Shi, Yongjie, Xianghua Ying and Jinfa Yang. "Deep Unsupervised Domain Adaptation with Time Series Sensor Data: A Survey". Sensors 22, no. 15 (July 23, 2022): 5507. http://dx.doi.org/10.3390/s22155507.

Full text
Abstract
Sensors are devices that output signals for sensing physical phenomena and are widely used in all aspects of our social production activities. The continuous recording of physical parameters allows effective analysis of the operational status of the monitored system and prediction of unknown risks. Thanks to the development of deep learning, the ability to analyze temporal signals collected by sensors has been greatly improved. However, models trained in the source domain do not perform well in the target domain due to the presence of domain gaps. In recent years, many researchers have used deep unsupervised domain adaptation techniques to address the domain gap between signals collected by sensors in different scenarios, i.e., using labeled data in the source domain and unlabeled data in the target domain to improve the performance of models in the target domain. This survey first summarizes the background of recent research on unsupervised domain adaptation with time series sensor data, the types of sensors used, the domain gap between the source and target domains, and commonly used datasets. Then, the paper classifies and compares different unsupervised domain adaptation methods according to the way of adaptation and summarizes different adaptation settings based on the number of source and target domains. Finally, this survey discusses the challenges of the current research and provides an outlook on future work. This survey systematically reviews and summarizes recent research on unsupervised domain adaptation for time series sensor data to provide the reader with a systematic understanding of the field.
45

Batini, Carlo, Anisa Rula, Monica Scannapieco and Gianluigi Viscusi. "From Data Quality to Big Data Quality". Journal of Database Management 26, no. 1 (January 2015): 60–82. http://dx.doi.org/10.4018/jdm.2015010103.

Full text
Abstract
This article investigates the evolution of data quality issues from traditional structured data managed in relational databases to Big Data. In particular, the paper examines the nature of the relationship between Data Quality and several research coordinates that are relevant in Big Data, such as the variety of data types, data sources and application domains, focusing on maps, semi-structured texts, linked open data, sensors & sensor networks and official statistics. Consequently, a set of structural characteristics is identified and a systematization of the a posteriori correlation between them and quality dimensions is provided. Finally, Big Data quality issues are considered in a conceptual framework suitable for mapping the evolution of the quality paradigm according to three core coordinates that are significant in the context of the Big Data phenomenon: the data type considered, the source of data, and the application domain. The framework thus allows the relevant changes in data quality emerging with the Big Data phenomenon to be ascertained through an integrative and theoretical literature review.
46

Trad, Daniel O. and Jandyr M. Travassos. "Wavelet filtering of magnetotelluric data". GEOPHYSICS 65, no. 2 (March 2000): 482–91. http://dx.doi.org/10.1190/1.1444742.

Full text
Abstract
A method is described for filtering magnetotelluric (MT) data in the wavelet domain that requires a minimum of human intervention and leaves good data sections unchanged. Good data sections are preserved because data in the wavelet domain are analyzed through hierarchies, or scale levels, allowing the separation of noise from signal. This is done without any assumption on the data distribution of the MT transfer function. Noisy portions of the data are discarded by thresholding wavelet coefficients. The procedure can recognize and filter out point defects that appear as a fraction of unusual observations of an impulsive nature in either the time domain or the frequency domain. Two examples of real MT data are presented, one with noise caused by meteorological activity and one with a power-line contribution; in these examples the noise is better seen in the time and frequency domains, respectively. Point defects are filtered out to eliminate their deleterious influence on the MT transfer function estimates. After the filtering stage, the data are processed in the frequency domain using a robust algorithm to yield two sets of reliable MT transfer functions.
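A minimal sketch of the core idea, discarding impulsive outliers by thresholding detail coefficients level by level; the db4 wavelet, the robust MAD noise estimate and the cutoff factor k are our illustrative defaults, not the authors' settings:

```python
import numpy as np
import pywt

def despike(signal, wavelet="db4", level=5, k=4.0):
    """Zero wavelet detail coefficients that are unusually large for their
    scale level (likely impulsive defects), then reconstruct the series."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    out = [coeffs[0]]                              # keep the approximation
    for c in coeffs[1:]:                           # detail coefficients per level
        sigma = np.median(np.abs(c)) / 0.6745      # robust (MAD) scale estimate
        c = c.copy()
        c[np.abs(c) > k * sigma] = 0.0             # discard outlier coefficients
        out.append(c)
    return pywt.waverec(out, wavelet)
```

Because the thresholds are computed per scale level, sections whose coefficients stay within the expected range at every level pass through unchanged, which is how good data sections are preserved.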
47

Zhou, Kaiyang, Yongxin Yang, Timothy Hospedales and Tao Xiang. "Deep Domain-Adversarial Image Generation for Domain Generalisation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 13025–32. http://dx.doi.org/10.1609/aaai.v34i07.7003.

Full text
Abstract
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution. To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains. In this paper, we propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG). Specifically, DDAIG consists of three components, namely a label classifier, a domain classifier and a domain transformation network (DoTNet). The goal for DoTNet is to map the source training data to unseen domains. This is achieved by having a learning objective formulated to ensure that the generated data can be correctly classified by the label classifier while fooling the domain classifier. By augmenting the source training data with the generated unseen domain data, we can make the label classifier more robust to unknown domain changes. Extensive experiments on four DG datasets demonstrate the effectiveness of our approach.
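The DoTNet learning objective can be sketched schematically as follows; the additive, epsilon-scaled perturbation and the simple negated domain loss are illustrative assumptions rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def dotnet_loss(dotnet, label_clf, domain_clf, x, y, d, eps=0.3):
    """Perturb source images toward an 'unseen' domain: keep the label
    classifier correct while fooling the domain classifier."""
    x_new = x + eps * dotnet(x)                        # synthesized view
    keep_label = F.cross_entropy(label_clf(x_new), y)  # still class-consistent
    fool_domain = -F.cross_entropy(domain_clf(x_new), d)  # maximize domain error
    return keep_label + fool_domain
```

Training the label classifier on both x and x_new then realizes the augmentation with generated unseen-domain data that the abstract describes.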
48

Zarate, Luis, Bruno Petrocchi, Carlos Dias Maia, Caio Felix and Marco Paulo Gomes. "CAPTO - A method for understanding problem domains for data science projects". Concilium 23, no. 15 (August 21, 2023): 922–41. http://dx.doi.org/10.53660/clm-1815-23m33.

Full text
Abstract
Data Science aims to infer knowledge from facts and evidence expressed in data. This occurs through a knowledge discovery process (KDD), which requires an understanding of the application domain. In practice, however, not enough time is spent on understanding this domain, and consequently the extracted knowledge may be incorrect or irrelevant. Considering that understanding the problem is an essential step in the KDD process, this work proposes the CAPTO method for understanding domains, based on knowledge management models, and, together with the available or acquired tacit and explicit knowledge, proposes a strategy for constructing conceptual models to represent the problem domain. This model contains the main dimensions (perspectives), aspects and attributes that may be relevant at the start of a data science project. As a case study, it is applied to the Type 2 Diabetes domain. The results show the effectiveness of the method. The conceptual model obtained through the CAPTO method can be used as an initial step for the conceptual selection of attributes.
49

Fillion, Luc, Monique Tanguay, Ervig Lapalme, Bertrand Denis, Michel Desgagne, Vivian Lee, Nils Ek et al. "The Canadian Regional Data Assimilation and Forecasting System". Weather and Forecasting 25, no. 6 (December 1, 2010): 1645–69. http://dx.doi.org/10.1175/2010waf2222401.1.

Full text
Abstract
This paper describes the recent changes to the regional data assimilation and forecasting system at the Canadian Meteorological Center. A major aspect is the replacement of the currently operational global variable resolution forecasting approach by a limited-area nested approach. In addition, the variational analysis code has been upgraded to allow limited-area three- and four-dimensional variational data assimilation (3D- and 4DVAR) analysis approaches. As a first implementation step, the constraints were to impose similar background error correlation modeling assumptions, equal computer resources, and the use of the same assimilated data. Both bi-Fourier and spherical-harmonics spectral representations of background error correlations were extensively tested for the large horizontal domain considered for the Canadian regional system. Under such conditions, it is shown that the new regional data assimilation and forecasting system performs as well as the current operational system and it produces slightly better 24-h accumulated precipitation scores as judged from an ensemble of winter and summer cases. Because of the large horizontal extent of the regional domain considered, a spherical-harmonics spectral representation of background error correlations was shown to perform better than the bi-Fourier representation, considering all evaluation scores examined in this study. The latter is more suitable for smaller domains and will be kept for the upcoming use in the kilometric-scale local analysis domains in order to support the Canadian Meteorological Center's (CMC's) operations using multiple domains over Canada. The CMC's new regional system [i.e., a regional limited-area 3DVAR data assimilation system coupled to a limited-area model (REG-LAM3D)] is now undergoing its final evaluations before operational transfer. Important model and data assimilation upgrades are currently under development to fully exploit this new system and are briefly presented.
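For orientation, the generic 3DVAR cost function minimized by such variational analysis codes is J(x) = 1/2 (x - xb)^T B^-1 (x - xb) + 1/2 (y - Hx)^T R^-1 (y - Hx); the toy sketch below (dense matrices, illustrative names) just evaluates these two terms, with B carrying the background error correlations whose spectral representation the paper studies:

```python
import numpy as np

def j3dvar(x, xb, B_inv, y, H, R_inv):
    """J(x) = 1/2 (x-xb)^T B^-1 (x-xb) + 1/2 (y-Hx)^T R^-1 (y-Hx)."""
    db = x - xb                    # departure from the background state
    do = y - H @ x                 # departure from the observations
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do
```

Operational systems never form B explicitly at this size; the spectral representations compared in the paper are precisely devices for applying B (or its inverse) efficiently.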
50

Wang, Ke, Jiayong Liu and Jing-Yan Wang. "Learning Domain-Independent Deep Representations by Mutual Information Minimization". Computational Intelligence and Neuroscience 2019 (June 16, 2019): 1–14. http://dx.doi.org/10.1155/2019/9414539.

Full text
Abstract
Domain transfer learning aims to learn common data representations from a source domain and a target domain so that the source domain data can help the classification of the target domain. Conventional transfer representation learning imposes similarity between the distributions of source and target domain representations, which relies heavily on the characterization of the domain distributions and on the distribution-matching criteria. In this paper, we propose a novel framework for domain transfer representation learning. Our motivation is to make the learned representations of data points independent of the domains they belong to; in other words, from an optimal cross-domain representation of a data point, it is difficult to tell which domain it comes from. In this way, the learned representations can be generalized to different domains. To measure the dependency between the representations and the domain to which the data points belong, we propose to use the mutual information between the representations and the domain-membership indicators. By minimizing this mutual information, we learn representations which are independent of domains. We build a classwise deep convolutional network model as the representation model and maximize the margin of each data point of the corresponding class, defined over the intraclass and interclass neighborhoods. To learn the parameters of the model, we construct a unified minimization problem in which the margins are maximized while the representation-domain mutual information is minimized. In this way, we learn representations which are not only discriminative but also independent of domains. An iterative algorithm based on the Adam optimization method is proposed to solve the minimization, learning the classwise deep model parameters and the cross-domain representations simultaneously. Extensive experiments on benchmark datasets show its effectiveness and advantage over existing domain transfer learning methods.
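The mutual-information term can only be minimized through a tractable proxy; the sketch below uses one common variational construction (an auxiliary domain predictor whose confident predictions are penalized), which is an assumption on our part rather than the authors' exact estimator:

```python
import torch
import torch.nn.functional as F

def mi_proxy_losses(domain_head, z, d, n_domains=2):
    """(1) Fit an auxiliary head to predict the domain d from representation z;
    (2) train the encoder so the head's output approaches the uniform
    distribution, i.e. z carries as little domain information as possible."""
    head_loss = F.cross_entropy(domain_head(z.detach()), d)
    probs = F.softmax(domain_head(z), dim=1)
    uniform = torch.full_like(probs, 1.0 / n_domains)
    enc_loss = F.kl_div(probs.log(), uniform, reduction="batchmean")
    return head_loss, enc_loss
```

When the best achievable domain predictor is reduced to chance on z, the representation carries essentially no domain information, which is the paper's notion of domain independence.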
