Selected scholarly literature on the topic "Task Group on Discovery and Metadata"



Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources related to the topic "Task Group on Discovery and Metadata".


You can also download the full text of a publication as a .pdf file and read its abstract online when one is included in the metadata.

Journal articles on the topic "Task Group on Discovery and Metadata"

1

Turp, Clara, Lee Wilson, Julienne Pascoe, and Alex Garnett. "The Fast and the FRDR: Improving Metadata for Data Discovery in Canada". Publications 8, no. 2 (May 2, 2020): 25. http://dx.doi.org/10.3390/publications8020025.

Abstract:
The Federated Research Data Repository (FRDR), developed through a partnership between the Canadian Association of Research Libraries’ Portage initiative and the Compute Canada Federation, improves research data discovery in Canada by providing a single search portal for research data stored across Canadian governmental, institutional, and discipline-specific data repositories. While this national discovery layer helps to de-silo Canadian research data, challenges in data discovery remain due to a lack of standardized metadata practices across repositories. In recognition of this challenge, a Portage task group, drawn from a national network of experts, has engaged in a project to map subject keywords to the Online Computer Library Center’s (OCLC) Faceted Application of Subject Terminology (FAST) using the open source OpenRefine software. This paper will describe the task group’s project, discuss the various approaches undertaken by the group, and explore how this work improves data discovery and may be adopted by other repositories and metadata aggregators to support metadata standardization.
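The reconciliation workflow described above can be pictured with a toy example. The sketch below matches free-text repository keywords against a tiny, invented excerpt of FAST headings using fuzzy string matching; the FAST_SAMPLE table, its identifiers, and the normalisation rule are illustrative assumptions, not the task group's actual OpenRefine recipe or OCLC's full vocabulary.

```python
# Minimal sketch of keyword-to-FAST reconciliation, analogous in spirit to the
# OpenRefine workflow described above. FAST_SAMPLE and its identifiers are
# hypothetical stand-ins for OCLC's full FAST vocabulary.
from difflib import get_close_matches

FAST_SAMPLE = {              # label -> hypothetical FAST identifier
    "Oceanography": "fst-0000001",
    "Climatic changes": "fst-0000002",
    "Metadata": "fst-0000003",
}

def reconcile(keyword: str, vocabulary: dict, cutoff: float = 0.6):
    """Return the best-matching FAST heading and ID for a free-text keyword, or None."""
    labels = list(vocabulary)
    match = get_close_matches(keyword.strip().capitalize(), labels, n=1, cutoff=cutoff)
    return (match[0], vocabulary[match[0]]) if match else None

# Keywords as they might arrive from heterogeneous repository records
for raw in ["climate change", "metadata", "oceanograpy"]:
    print(raw, "->", reconcile(raw, FAST_SAMPLE))
```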
2

Hoarfrost, Adrienne, Nick Brown, C. Titus Brown, and Carol Arnosti. "Sequencing data discovery with MetaSeek". Bioinformatics 35, no. 22 (June 21, 2019): 4857–59. http://dx.doi.org/10.1093/bioinformatics/btz499.

Abstract:
Summary: Sequencing data resources have increased exponentially in recent years, as has interest in large-scale meta-analyses of integrated next-generation sequencing datasets. However, curation of integrated datasets that match a user’s particular research priorities is currently a time-intensive and imprecise task. MetaSeek is a sequencing data discovery tool that enables users to flexibly search and filter on any metadata field to quickly find the sequencing datasets that meet their needs. MetaSeek automatically scrapes metadata from all publicly available datasets in the Sequence Read Archive, cleans and parses messy, user-provided metadata into a structured, standard-compliant database and predicts missing fields where possible. MetaSeek provides a web-based graphical user interface and interactive visualization dashboard, as well as a programmatic API to rapidly search, filter, visualize, save, share and download matching sequencing metadata. Availability and implementation: The MetaSeek online interface is available at https://www.metaseek.cloud/. The MetaSeek database can also be accessed via API to programmatically search, filter and download all metadata. MetaSeek source code, metadata scrapers and documents are available at https://github.com/MetaSeek-Sequencing-Data-Discovery/metaseek/.
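To make the filtering idea concrete, here is a minimal pandas sketch of the kind of metadata query such a tool supports; the column names and records are hypothetical SRA-style examples, not output of the actual MetaSeek API.

```python
# Illustrative sketch of filtering sequencing-run metadata on arbitrary fields.
# The records and column names below are made up for the example.
import pandas as pd

datasets = pd.DataFrame([
    {"run_id": "SRR0000001", "library_strategy": "AMPLICON", "env_biome": "marine", "avg_read_length": 250},
    {"run_id": "SRR0000002", "library_strategy": "WGS", "env_biome": "marine", "avg_read_length": 150},
    {"run_id": "SRR0000003", "library_strategy": "WGS", "env_biome": "soil", "avg_read_length": 100},
])

# Filter on any metadata field, e.g. marine whole-genome shotgun runs with reads >= 150 bp
hits = datasets.query("library_strategy == 'WGS' and env_biome == 'marine' and avg_read_length >= 150")
print(hits[["run_id", "avg_read_length"]])
```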
3

Hagen, Brianne. "Book Review: Managing Metadata in Web-scale Discovery Systems". Library Resources & Technical Services 61, no. 3 (July 14, 2017): 172. http://dx.doi.org/10.5860/lrts.61n3.172.

Abstract:
Managing metadata in libraries today presents challenges to information professionals concerned with quality control, providing relevant search results, and taming the volume of items available for access in a web-scale discovery system. No longer are libraries limited to the collections they “own.” Catalogers and metadata professionals now assume the responsibility of providing access to millions of resources, often with limitations on who can access that resource. Relationships with vendors provide opportunities to help manage the gargantuan scale of information. Of course those opportunities come with their own problems as relationships among vendors can be contentious, leaving metadata managers to figure out quality control on a grand scale. In addition to this politicized information landscape, new ways of managing and creating metadata are emerging, leaving information professionals with the task of managing multiple schema in different formats. The essays in Managing Metadata in Web-scale Discovery Systems seek to address issues in managing the large scale of information overwhelming catalogers today, with potential solutions for taming the beast of exponentially increasing data.
4

Miles, Simon, Juri Papay, Terry Payne, Michael Luck, and Luc Moreau. "Towards a Protocol for the Attachment of Metadata to Grid Service Descriptions and Its Use in Semantic Discovery". Scientific Programming 12, no. 4 (2004): 201–11. http://dx.doi.org/10.1155/2004/170481.

Abstract:
Service discovery in large scale, open distributed systems is difficult because of the need to filter out services suitable to the task at hand from a potentially huge pool of possibilities. Semantic descriptions have been advocated as the key to expressive service discovery, but the most commonly used service descriptions and registry protocols do not support such descriptions in a general manner. In this paper, we present a protocol, its implementation and an api for registering semantic service descriptions and other task/user-specific metadata, and for discovering services according to these. Our approach is based on a mechanism for attaching structured and unstructured metadata, which we show to be applicable to multiple registry technologies. The result is an extremely flexible service registry that can be the basis of a sophisticated semantically-enhanced service discovery engine, an essential component of a Semantic Grid.
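As a rough illustration of the general idea of attaching task-specific metadata to service descriptions and discovering services through it, here is a minimal Python sketch; the classes, the (namespace, key) convention, and the discover() helper are hypothetical and are not the registry protocol or API defined in the paper.

```python
# Minimal sketch of attaching structured metadata to service descriptions for
# later discovery. All names here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    name: str
    endpoint: str
    metadata: dict = field(default_factory=dict)   # (namespace, key) -> value
    notes: list = field(default_factory=list)      # unstructured metadata

def discover(registry, **criteria):
    """Return services whose attached metadata satisfies all key=value criteria."""
    return [s for s in registry
            if all(s.metadata.get(("task", k)) == v for k, v in criteria.items())]

registry = [ServiceEntry("align-svc", "http://example.org/align",
                         {("task", "domain"): "bioinformatics", ("task", "quality"): "high"})]
print(discover(registry, domain="bioinformatics"))
```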
5

Michel, Franck, and The Bioschemas Community. "Bioschemas & Schema.org: a Lightweight Semantic Layer for Life Sciences Websites". Biodiversity Information Science and Standards 2 (May 22, 2018): e25836. http://dx.doi.org/10.3897/biss.2.25836.

Abstract:
Web portals are commonly used to expose and share scientific data. They enable end users to find, organize and obtain data relevant to their interests. With the continuous growth of data across all science domains, researchers commonly find themselves overwhelmed as finding, retrieving and making sense of data becomes increasingly difficult. Search engines can help find relevant websites, but the short summarizations they provide in results lists are often little informative on how relevant a website is with respect to research interests. To yield better results, a strategy adopted by Google, Yahoo, Yandex and Bing involves consuming structured content that they extract from websites. Towards this end, the schema.org collaborative community defines vocabularies covering common entities and relationships (e.g., events, organizations, creative works) (Guha et al. 2016). Websites can leverage these vocabularies to embed semantic annotations within web pages, in the form of markup using standard formats. Search engines, in turn, exploit semantic markup to enhance the ranking of most relevant resources while providing more informative and accurate summarization. Additionally, adding such rich metadata is a step forward to make data FAIR, i.e. Findable, Accessible, Interoperable and Reusable. Although schema.org encompasses terms related to data repositories, datasets, citations, events, etc., it lacks specialized terms for modeling research entities. The Bioschemas community (Garcia et al. 2017) aims to extend schema.org to support markup for Life Sciences websites. A major pillar lies in reusing types from schema.org as well as well-adopted domain ontologies, while only proposing a limited set of new types. The goal is to enable semantic cross-linking between knowledge graphs extracted from marked-up websites. An overview of the main types is presented in Fig. 1. Bioschemas also provides profiles that specify how to describe an entity of some type. For instance, the protein profile requires a unique identifier, recommends to list transcribed genes and associated diseases, and points to recommended terms from the Protein Ontology and Semantic Science Integrated Ontology. The success of schema.org lies in its simplicity and the support by major search engines. By extending schema.org, Bioschemas enables life sciences research communities to benefit from a lightweight semantic layer on websites and thus facilitates discoverability and interoperability across them. From an initial pilot including just a few bio-types such as proteins and samples, the Bioschemas community has grown and is now opening up towards other disciplines. The biodiversity domain is a promising candidate for such further extensions. We can think of additional profiles to account for biodiversity-related information. For instance, since taxonomic registers are the backbone of many web portals and databases, new profiles could describe taxa and scientific names while reusing well-adopted vocabularies such as Darwin Core terms (Baskauf et al. 2016) or TDWG ontologies (TDWG Vocabulary Management Task Group 2013). Fostering the use of such markup by web portals reporting traits, observations or museum collections could not only improve information discovery using search engines, but could also be a key to spur large-scale biodiversity data integration scenarios.
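The markup mechanism described above can be illustrated with a minimal sketch that emits a schema.org Dataset annotation as JSON-LD for embedding in a web page; the dataset values are invented, and a Bioschemas profile would layer life-science types and recommended properties on top of this same pattern.

```python
# Minimal sketch of schema.org JSON-LD markup of the kind discussed above.
# The dataset values are placeholders for illustration.
import json

dataset_markup = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example taxon occurrence records",
    "description": "Occurrence records exposed by a hypothetical biodiversity portal.",
    "identifier": "https://doi.org/10.0000/example",
    "keywords": ["biodiversity", "occurrence", "taxonomy"],
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# Embed in a web page so search engines can harvest the structured content
html_snippet = f'<script type="application/ld+json">\n{json.dumps(dataset_markup, indent=2)}\n</script>'
print(html_snippet)
```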
6

Williamschen, Jodi. "Work in Progress: The PCC Task Group on Metadata Application Profiles". Cataloging & Classification Quarterly 58, no. 3-4 (January 30, 2020): 458–63. http://dx.doi.org/10.1080/01639374.2020.1717708.

7

Evans, Bruce J., Karen Snow, Elizabeth Shoemaker, Maurine McCourry, Allison Yanos, Jennifer A. Liss, and Susan Rathbun-Grubb. "Competencies through Community Engagement: Developing the Core Competencies for Cataloging and Metadata Professional Librarians". Library Resources & Technical Services 62, no. 4 (October 3, 2018): 188. http://dx.doi.org/10.5860/lrts.62n4.188.

Abstract:
In 2015 the Association for Library Collections and Technical Services Cataloging and Metadata Management Section (ALCTS CaMMS) Competencies for a Career in Cataloging Interest Group (CECCIG) charged a task force to create a core competencies document for catalogers. The process leading to the final document, the Core Competencies for Cataloging and Metadata Professional Librarians, involved researching the use of competencies documents, envisioning an accessible final product, and engaging in collaborative writing. Additionally, the task force took certain measures to solicit and incorporate feedback from the cataloging community throughout the entire process. The Competencies document was approved by the ALCTS Board of Directors in January 2017. Task force members who were involved in the final stages of the document’s creation detail their processes and purposes in this paper and provide recommendations for groups approaching similar tasks.
8

Ocvirk, Pierre, Gilles Landais, Laurent Michel, Heddy Arab, Sylvain Guehenneux, Thomas Boch, Marianne Brouty et al. "Associated data: Indexation, discovery, challenges and roles". EPJ Web of Conferences 186 (2018): 02002. http://dx.doi.org/10.1051/epjconf/201818602002.

Abstract:
Astronomers are nowadays required by their funding agencies to make the data obtained through public-financed means (ground and space observatories and labs) available to the public and the community at large. This is a fundamental step in enabling the open science paradigm the astronomical community is striving for. In other words, tabular data (catalogs) arriving to CDS for ingestion into its databases, in particular VizieR, is more and more frequently accompanied by the reduced observed dataset (spectra, images, data cubes, time series). While the benefits of making this associated data available are obvious, the task is very challenging: in this context "big data" takes the meaning of "extremely heterogeneous data", with a diversity of formats and practices among astronomers, even within the FITS standard. Providing librarians with efficient tools to index this data and generate the relevant metadata is therefore paramount.
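As a small illustration of the indexing task described above, the sketch below harvests a uniform metadata record from a FITS primary header with astropy; the file name and the keyword list are assumptions, and real headers are far more heterogeneous in practice.

```python
# Minimal sketch of harvesting indexing metadata from a FITS header.
# The file name and chosen keywords are illustrative only.
from astropy.io import fits

KEYWORDS = ["TELESCOP", "INSTRUME", "OBJECT", "DATE-OBS", "RA", "DEC"]

def harvest(path: str) -> dict:
    """Extract a small, uniform metadata record from a FITS primary header."""
    with fits.open(path) as hdul:
        header = hdul[0].header
        return {key: header.get(key) for key in KEYWORDS}  # None when a keyword is missing

print(harvest("example_spectrum.fits"))  # hypothetical file
```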
9

Su, Shian, Vincent J. Carey, Lori Shepherd, Matthew Ritchie, Martin T. Morgan, and Sean Davis. "BiocPkgTools: Toolkit for mining the Bioconductor package ecosystem". F1000Research 8 (May 29, 2019): 752. http://dx.doi.org/10.12688/f1000research.19410.1.

Abstract:
Motivation: The Bioconductor project, a large collection of open source software for the comprehension of large-scale biological data, continues to grow with new packages added each week, motivating the development of software tools focused on exposing package metadata to developers and users. The resulting BiocPkgTools package facilitates access to extensive metadata in computable form covering the Bioconductor package ecosystem, facilitating downstream applications such as custom reporting, data and text mining of Bioconductor package text descriptions, graph analytics over package dependencies, and custom search approaches. Results: The BiocPkgTools package has been incorporated into the Bioconductor project, installs using standard procedures, and runs on any system supporting R. It provides functions to load detailed package metadata, longitudinal package download statistics, package dependencies, and Bioconductor build reports, all in "tidy data" form. BiocPkgTools can convert from tidy data structures to graph structures, enabling graph-based analytics and visualization. An end-user-friendly graphical package explorer aids in task-centric package discovery. Full documentation and example use cases are included. Availability: The BiocPkgTools software and complete documentation are available from Bioconductor (https://bioconductor.org/packages/BiocPkgTools).
10

Kaiser, Kathryn A., John Chodacki, Ted Habermann, Jennifer Kemp, Laura Paglione, Michelle Urberg, and T. Scott Plutchak. "Metadata: The accelerant we need". Information Services & Use 40, no. 3 (November 10, 2020): 181–91. http://dx.doi.org/10.3233/isu-200094.

Abstract:
Large-scale pandemic events have sent scientific communities scrambling to gather and analyze data to provide governments and policy makers with information to inform decisions and policies needed when imperfect information is all that may be available. Historical records from the 1918 influenza pandemic reflect how little improvement has been made in how government and policy responses are formed when large scale threats occur, such as the COVID-19 pandemic. This commentary discusses three examples of how metadata improvements are being, or may be made, to facilitate gathering and assessment of data to better understand complex and dynamic situations. In particular, metadata strategies can be applied in advance, on the fly or even after events to integrate and enrich perspectives that aid in creating balanced actions to minimize impacts with lowered risk of unintended consequences. Metadata can enhance scope, speed and clarity with which scholarly communities can curate their outputs for optimal discovery and reuse. Conclusions are framed within the Metadata 2020 working group activities that lay a foundation for advancement of scholarly communications to better serve all communities.

Book chapters on the topic "Task Group on Discovery and Metadata"

1

Yousefi, Niloofar, Michael Georgiopoulos, and Georgios C. Anagnostopoulos. "Multi-Task Learning with Group-Specific Feature Space Sharing". In Machine Learning and Knowledge Discovery in Databases, 120–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23525-7_8.

2

Aguado, D., J. Alferes, A. Menniti, Q. Plana, M. V. Ruano, O. Samuelsson, K. Villez, and E. Zegers. "Conclusions and outlook". In Metadata Collection and Organization in Wastewater Treatment and Wastewater Resource Recovery Systems, 241–54. IWA Publishing, 2024. http://dx.doi.org/10.2166/9781789061154_0241.

Abstract:
In this chapter, we provide notes on the state of metadata collection and organization practices and reflect on the achievements of the MetaCO task group. We also include an utility perspective on how to use this report in daily data management practice and a short guide is provided to show how metadata practices can be initiated. The chapter, and the report, is concluded within an outlook, sketching future progress and opportunities on the horizon.
3

Aguado, D., J. Alferes, Q. Plana, M. V. Ruano, O. Samuelsson, and K. Villez. "Introduction". In Metadata Collection and Organization in Wastewater Treatment and Wastewater Resource Recovery Systems, 1–6. IWA Publishing, 2024. http://dx.doi.org/10.2166/9781789061154_0001.

Abstract:
This report provides a comprehensive overview of metadata to describe sensor signals in wastewater treatment plants and methods to obtain such metadata. In this introduction, we explain the original motivation behind the MetaCO task group. This includes a description of historical challenges (data volume, data velocity) for which mature technology is now available, and newer challenges, which relate to data structure (data variety) and data quality (veracity). We conclude the chapter with an expression of gratitude to all involved.
4

Strohmaier, Robert, Gerhard Sprung, Alexander Nischelwitzer, and Sandra Schadenbauer. "Usability Testing of Mixed Reality Scenarios: A Hands-on Report". In Updates on Software Usability [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.107792.

Abstract:
We would like to share our insights in designing, preparing, preforming, and analyzing usability tests for multiple connected augmented reality and virtual reality applications as well as traditional mobile applications developed for a multimodal screening tool. This screening tool is under development at the University of Applied Sciences FH JOANNEUM in Graz, Austria. Several researchers from the departments of health studies and applied computer sciences are working closely together to establish a tool for early diagnosis of cognitive impairments to contribute to the management of dementia. The usability of this screening tool was evaluated by ten therapists paired with ten clients as testing group 1 and two usability experts in a separate test (group 2). In this chapter, we would like to describe why we use observed summative evaluation using the co-discovery method followed by post-task questionnaires for the first testing group. We are going to discuss the reasons for performing the cognitive walkthrough method as co-discovery with usability experts of testing group two as well. Furthermore, we describe how we use camera recordings (traditional cameras, 360-degree cameras), screen recording, and special tailor-made software to experience the screening process through the user’s eyes.
5

Theng, Yin-Leng, Nyein Chan Lwin Lwin, Jin-Cheon Na, Schubert Foo, and Dion Hoe-Lian Goh. "Design and Development of a Taxonomy Generator". In Handbook of Research on Digital Libraries, 73–84. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-59904-879-6.ch008.

Abstract:
This chapter addresses the issues of resource discovery in digital libraries (DLs) and the importance of knowledge organization tools in building DLs. Using the Greenstone digital library (GSDL) software as a case example, we describe a taxonomy generation tool (TGT) prototype, a hierarchical classification of contents module, designed and built to categorize contents within DLs. TGT was developed as a desktop application using Microsoft .NET Framework 2.0 in Visual C# language and object-oriented programming. In TGT, Z39.19 was implemented providing standard guidelines to construct, format, and manage monolingual controlled vocabularies, usage of broader terms, narrower terms and related terms as well as their semantic relationships, and the simple knowledge organization system (SKOS) for vocabulary specification. The XML schema definition was designed to validate against rules developed for the XML taxonomy template, hence, resulting in the generated taxonomy template supporting controlled vocabulary terms as well as allowing users to select the labels for the taxonomy structure. A pilot user study was then conducted to evaluate the usability and usefulness of TGT and the taxonomy template. In this study, we observed four subjects using TGT, followed by a focus group for comments. Initial feedback was positive, indicating the importance of having a taxonomy structure in GSDL. Recommendations for future work include content classification and metadata technologies in TGT.
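To illustrate the kind of controlled-vocabulary structure such a taxonomy generator targets, here is a minimal rdflib sketch that encodes broader, narrower, and related terms in SKOS; the namespace, concepts, and labels are invented, and the actual TGT template format may differ.

```python
# Minimal sketch of a SKOS concept hierarchy with broader/narrower/related links.
# Namespace and concept labels are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/taxonomy/")
g = Graph()
g.bind("skos", SKOS)

for concept in ("libraries", "digital_libraries", "metadata"):
    g.add((EX[concept], RDF.type, SKOS.Concept))
    g.add((EX[concept], SKOS.prefLabel, Literal(concept.replace("_", " ").title(), lang="en")))

g.add((EX.digital_libraries, SKOS.broader, EX.libraries))   # broader term (BT)
g.add((EX.libraries, SKOS.narrower, EX.digital_libraries))  # narrower term (NT)
g.add((EX.digital_libraries, SKOS.related, EX.metadata))    # related term (RT)

print(g.serialize(format="turtle"))
```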

Conference papers on the topic "Task Group on Discovery and Metadata"

1

Zheng, Zimu, Yuqi Wang, Quanyu Dai, Huadi Zheng, and Dan Wang. "Metadata-driven Task Relation Discovery for Multi-task Learning". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/615.

Abstract:
Task Relation Discovery (TRD), i.e., reveal the relation of tasks, has notable value: it is the key concept underlying Multi-task Learning (MTL) and provides a principled way for identifying redundancies across tasks. However, task relation is usually specifically determined by data scientist resulting in the additional human effort for TRD, while transfer based on brute-force methods or mere training samples may cause negative effects which degrade the learning performance. To avoid negative transfer in an automatic manner, our idea is to leverage commonly available context attributes in nowadays systems, i.e., the metadata. In this paper, we, for the first time, introduce metadata into TRD for MTL and propose a novel Metadata Clustering method, which jointly uses historical samples and additional metadata to automatically exploit the true relatedness. It also avoids the negative transfer by identifying reusable samples between related tasks. Experimental results on five real-world datasets demonstrate that the proposed method is effective for MTL with TRD, and particularly useful in complicated systems with diverse metadata but insufficient data samples. In general, this study helps in automatic relation discovery among partially related tasks and sheds new light on the development of TRD in MTL through the use of metadata as apriori information.
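A toy sketch of the underlying idea, grouping tasks by their metadata so that samples are shared only within a group, is shown below using k-means; this is an illustrative stand-in, not the Metadata Clustering method proposed in the paper, and the metadata vectors are invented.

```python
# Illustrative sketch: cluster tasks by metadata so training samples are shared
# only among related tasks. A toy k-means stand-in, not the paper's algorithm.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-task metadata vectors (e.g. encoded location, device type, season)
task_metadata = np.array([
    [0.1, 1.0, 0.0],   # task A
    [0.2, 1.0, 0.0],   # task B (similar context to A)
    [0.9, 0.0, 1.0],   # task C
    [0.8, 0.0, 1.0],   # task D (similar context to C)
])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(task_metadata)
for task, group in zip("ABCD", groups):
    print(f"task {task} -> group {group}")
# Samples would then be pooled only among tasks in the same group, limiting negative transfer.
```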
2

Morgan, O. "Wrappers and Metadata Sub Group". In IEE Colloquium on The EBU-SMPTE Task Force: Building an Infrastructure for Managing Compressed Video Systems. IEE, 1997. http://dx.doi.org/10.1049/ic:19971299.

3

Vucinic, Dean, Marina Pesut, Franjo Jovic, and Chris Lacor. "Exploring Ontology-Based Approach to Facilitate Integration of Multi-Physics and Visualization for Numerical Models". In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-86477.

Abstract:
Today, within the engineering design process, we have interactions between different design teams, where each team has its own design objective and continuous need to present and share results with other groups. Common engineering environments are equipped with advanced modeling and simulation tools, specially designed to improve engineer’s productivity. In this paper we propose the use of ontologies, the semantic metadata descriptors, to facilitate the software development process in building such multidisciplinary engineering environments. The important development task is to perform integration of several numerical simulation components (models of data and processes) together with the interactive visualization of the engineering models in a unified 3D scene. In addition, we explore the possibilities on how the prototyped ontologies can become standard components in such software systems, where the presence of the inference engine grants and enables continuous semantic integration of the involved data and processes. The semantic integration is based on: 1) mapping discovery between two or more ontologies, 2) declarative formal representation of mappings to enable 3) reasoning with mappings and find what types of reasoning are involved; and we have explored these three dimensions. The proposed solution involves two web based software standards: Semantic Web and X3D. The developed prototype make use of the “latest” available XML-based software technologies, such X3D (eXtensible 3D) and OWL (Web Ontology Language), and demonstrates the modeling approach to integrate heterogeneous data sources, their interoperability and 3D visual representations to enhance the end-users interactions with the engineering content. We demonstrate that our ontology-based approach is appropriate for the reuse, share and exchange of software constructs, which implements differential-geometric algorithms used in multidisciplinary numerical simulations, by applying adopted ontologies that are used in the knowledge-based systems. The selected engineering test case represents a complex multi-physics problem FSI (Fluid Structure Interaction). It involves numerical simulations of a multi-component box structure used for the drop test in a still water. The numerical simulations of the drop test are performed through combined used of the FEM (Finite Element Method) and CFD (Computational Fluid Dynamics) solvers. The important aspect is the design of a common graphics X3D model, which combines the FEM data model, which is coupled with the CFD data model in order to preserve all the relationships between CFD and FEM data. Our ultimate vision is to build intelligent and powerful mechanical engineering software by developing infrastructure that may enable efficient data sharing and process integration mechanisms. We see our current work in exploring the ontology-based approach as a first step towards semantic interoperability of numerical simulations and visualization components for designing complex multi-physics solutions.
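The mapping-discovery step described above ultimately produces declarative correspondences between ontologies. The sketch below states one such correspondence with rdflib as an owl:equivalentClass axiom; the CFD and FEM URIs are hypothetical placeholders, not the ontologies used in the paper.

```python
# Minimal sketch of a declarative ontology mapping: asserting that a CFD grid
# point class corresponds to an FEM node class so a reasoner can merge models.
# The URIs are hypothetical placeholders.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

CFD = Namespace("http://example.org/cfd#")
FEM = Namespace("http://example.org/fem#")

g = Graph()
g.add((CFD.GridPoint, RDF.type, OWL.Class))
g.add((FEM.Node, RDF.type, OWL.Class))
# Formal, machine-readable statement of the mapping between the two vocabularies
g.add((CFD.GridPoint, OWL.equivalentClass, FEM.Node))

print(g.serialize(format="turtle"))
```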
4

Arlitt, Ryan, Anthony Nix, Robert Stone, and Chiradeep Sen. "Discovery of Mental Metadata Used for Analogy Formation in Function-Based Design". In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46963.

Abstract:
Applying previous solutions to solve new problems is a core aspect of design. In this context, analogies provide a mechanism to reapply previous solutions in new ways, but analogy formation is limited by a designer’s knowledge. One approach toward improving a designer’s analogy-forming capabilities is to provide an easy-to-use computational means of retrieving a wide breadth of relevant analogies. This work aims to answer what types of similarity are commonly used to draw design analogies, and whether some types of similarity are used more frequently in compound analogy versus single analogy. In this study, an experiment was performed to observe and document the types of information that designers found useful when forming analogies during conceptual design. A categorization of this information is sought in order to inform (1) the types of similarity data to store in an intuitive design-by-analogy database and (2) the form that a search query should take. The experiment consists of a design task and a follow up interview. Ten mechanical engineering graduate students specializing in design participated. These participants were interviewed, and their internal knowledge queries were encoded to reflect their objectives, thought process detail, direction of reasoning, and subject behavior type. Each conceptual design is cataloged according to whether it represents a compound analogy, a single analogy, or no analogy. The results show little difference between the types of information used in compound versus single analogy. Function, flow, and form information were all observed during analogy formation, indicating that all three types of information should play a role in a design-by-analogy database, regardless of generative goal. Notably, flow behavior was a commonly observed type of abstract similarity across domains. This points to the value of capturing flow behavior abstraction in engineering analogy databases.
5

da Silva, E. H. M., J. Laterza, and T. P. Faleiros. "New State-of-the-Art for Question Answering on Portuguese SQuAD v1.1". In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/kdmile.2022.227787.

Abstract:
In the Natural Language Processing field (NLP), Machine Reading Comprehension (MRC), which involves teaching computers to read a text and understand its meaning, has been a major research goal over the last few decades. A natural way to evaluate whether a computer can fully understand a piece of text or, in other words, test a machine’s reading comprehension, is to require it to answer questions about the text. In this sense, Question Answering (QA) has received increasing attention among NLP tasks. For this study, we fine-tuned BERT Portuguese language models (BERTimbau Base and BERTimbau Large) on SQuAD-BR - the SQuAD v.1.1 dataset translated to Portuguese by the Deep Learning Brazil group - for Extractive QA task, in order to achieve better performance than other existing models trained on the dataset. As a result, we accomplished our objective, establishing the new state-of-the-art on SQuAD-BR dataset using BERTimbau Large fine-tuned model.
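A minimal sketch of extractive question answering with a Portuguese BERT checkpoint through the Hugging Face pipeline API is shown below; the model identifier, question, and context are placeholders, and the paper's fine-tuned SQuAD-BR checkpoints may be published under different names.

```python
# Minimal sketch of extractive QA with the Hugging Face pipeline API.
# The model identifier is a hypothetical placeholder, not the paper's checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="some-org/bertimbau-finetuned-squad-br")  # hypothetical model id

context = "Metadados são dados que descrevem outros dados e facilitam a sua descoberta e reutilização."
result = qa(question="O que são metadados?", context=context)
print(result["answer"], result["score"])
```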
6

Zhai, Yuyao, Liang Chen, and Minghua Deng. "Realistic Cell Type Annotation and Discovery for Single-cell RNA-seq Data". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/552.

Abstract:
The rapid development of single-cell RNA sequencing (scRNA-seq) technologies allows us to explore tissue heterogeneity at the cellular level. Cell type annotation plays an essential role in the substantial downstream analysis of scRNA-seq data. Existing methods usually classify the novel cell types in target data as an “unassigned” group and rarely discover the fine-grained cell type structure among them. Besides, these methods carry risks, such as susceptibility to batch effect between reference and target data, thus further compromising of inherent discrimination of target data. Considering these limitations, here we propose a new and practical task called realistic cell type annotation and discovery for scRNA-seq data. In this task, cells from seen cell types are given class labels, while cells from novel cell types are given cluster labels. To tackle this problem, we propose an end-to-end algorithm framework called scPOT from the perspective of optimal transport (OT). Specifically, we first design an OT-based prototypical representation learning paradigm to encourage both global discriminations of clusters and local consistency of cells to uncover the intrinsic structure of target data. Then we propose an unbalanced OT-based partial alignment strategy with statistical filling to detect the cells from the seen cell types across reference and target data. Notably, scPOT also introduces an easy yet effective solution to automatically estimate the overall cell type number in target data. Extensive results on our carefully designed evaluation benchmarks demonstrate the superiority of scPOT over various state-of-the-art clustering and annotation methods.
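To give a flavour of the optimal-transport viewpoint, the toy sketch below aligns unlabeled target cells with two reference cell-type prototypes via entropic OT using the POT package; it is a drastic simplification for illustration only, not the scPOT algorithm, and the data are synthetic.

```python
# Toy sketch: align target cells with reference cell-type prototypes via
# entropic optimal transport (POT package, `pip install pot`). Not scPOT itself.
import numpy as np
import ot

rng = np.random.default_rng(0)
reference_prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])               # two known cell types
target_cells = rng.normal(loc=[[0, 0]] * 5 + [[5, 5]] * 5, scale=0.3)   # 10 unlabeled cells

# Uniform marginals and a normalised squared-Euclidean cost matrix
a = np.full(len(target_cells), 1 / len(target_cells))
b = np.full(len(reference_prototypes), 1 / len(reference_prototypes))
M = ot.dist(target_cells, reference_prototypes)
M = M / M.max()                                    # normalise costs for numerical stability

coupling = ot.sinkhorn(a, b, M, reg=0.05)          # entropic OT plan
assignments = coupling.argmax(axis=1)              # most-coupled prototype per cell
print(assignments)
```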
7

Aguado García, Daniel, Frank Blumensaat, Juan Antonio Baeza, Kris Villez, Mª Victoria Ruano, Oscar Samuelsson, Queralt Plana, and Janelcy Alferez. "Metadata: a must for the digital transition of wastewater treatment plants". In 2nd WDSA/CCWI Joint Conference. València: Editorial Universitat Politècnica de València, 2022. http://dx.doi.org/10.4995/wdsa-ccwi2022.2022.14249.

Abstract:
The increment in the number and diversity of available (and affordable) sensors together with the advances in information and communications technologies have made it possible to routinely measure and collect large amounts of data at wastewater treatment plants (WWTPs). This enormous amount of available data has boosted the interest in applying sound data-driven solutions to improve the current normal daily operation of these facilities. However, to have a real impact in current operation practices, useful information from the massive amount of data available should be extracted and turned into actionable knowledge. Machine learning (ML) techniques can search into large amounts of data to reveal patterns that a priori are not evident. ML can be applied to develop high-performance algorithms useful for different tasks such as pattern recognition, anomaly detection, clustering, visualization, classification, and regression. These ML algorithms are very good for data interpolation, but its extrapolation capabilities are low. Hence, the data available for training these data-driven models require points covering the complete space for the independent variables. A significant amount of data is required for this purpose, but data of good quality. To transform big data into smart data, giving value to the massive amount of data collected, it is of paramount importance to guarantee data quality to avoid “garbage in – garbage out”. The reliability of on-line measurements is a hard challenge in the wastewater sector. Wastewater is a harsh environment and poses a significant challenge to achieve sensor accuracy, precision, and responsiveness during long-term use. Despite the huge amount of data that currently being recorded at WWTPs, in many cases nothing is yet being done with them (resulting in data graveyards). Moreover, the use of the data collected is indeed very limited due to the lack of documentation of the data generation process and the lack of data quality assessment. Metadata is descriptive information of the collected data, such as the original purpose, the data-generating devices, the quality, and the context. Metadata is needed to clearly identify the data that should be used for the development of data-driven models. These data should be selected from the same category. If we include data that shouldn’t be in the same data set because they were obtained under different operational conditions, this would lead to unreliable model predictions. ML algorithms learn from data, thus to be useful tools and to really improve the decision-making process in WWTP operation and control, representative, reliable, annotated and high-quality data are needed. Effective digitalization requires the cultivation of good meta-data management practices. Unfortunately, there are no wastewater-specific guidelines available to the production, selection, prioritization, and management of meta-data. To address this challenge, the IWA Task Group on Meta-Data Collection and Organisation (MetaCO TG to which the authors of this paper belong) which has been supported by the International Water Association since 2020 will soon finish the scientific and technical report containing such guidelines specifically for WWTPs. This paper highlights why meta-data should be considered when collecting data as part of good digitalisation practices.
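As a concrete illustration of the kind of descriptive information discussed above, the sketch below defines a small metadata record that could accompany a sensor signal; the field names and values are illustrative assumptions, not the schema recommended in the MetaCO report.

```python
# Minimal sketch of a metadata record accompanying a WWTP sensor signal,
# covering purpose, device, quality and context. Fields are illustrative.
from dataclasses import dataclass, asdict
import json

@dataclass
class SensorMetadata:
    signal_id: str
    purpose: str            # why the data were originally collected
    device: str             # data-generating instrument
    units: str
    sampling_interval_s: int
    last_calibration: str   # ISO date
    quality_flagging: str   # how faulty data are marked
    context: str            # operating conditions during collection

record = SensorMetadata(
    signal_id="NH4_effluent_line2",
    purpose="aeration control",
    device="ion-selective ammonium probe",
    units="mg N/L",
    sampling_interval_s=60,
    last_calibration="2024-03-01",
    quality_flagging="out-of-range and frozen-value checks",
    context="wet-weather operation, line 2 in service",
)
print(json.dumps(asdict(record), indent=2))
```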
8

Pupezescu, Valentin. "PULSATING MULTILAYER PERCEPTRON". In eLSE 2016. Carol I National Defence University Publishing House, 2016. http://dx.doi.org/10.12753/2066-026x-16-035.

Abstract:
The Knowledge Discovery in Databases represents the process of extracting useful information from data that are stored in real databases. The Knowledge Discovery in Databases process consists of multiple steps which include selection target data from raw data, preprocessing, data transformation, Data Mining and interpretation of mined data. As we see, the Data Mining is one step from the whole process and it will perform one of these Data Mining task: classification, regression, clustering, association rules, summarization, dependency modelling, change and deviation detection. In this experiments I used one neural network(multilayer perceptron) that performs the classification task. This paper proposes a functioning model for the classical multilayer perceptron that is a sequential simulation of a Distributed Committee Machine. Committee Machines are a group of neural structures that work in a distributed manner as a group in order to obtain better classification results than individual neural networks. The classical backpropagation algorithm is modified in order to simulate the execution of multiple multilayer perceptrons that run in a sequential manner. The classification was made for three standard data sets: iris1, wine1 and conc1. In my case the backpropagation algorithm still consists of three well known stages: the feedforward of the input training pattern, the calculation of the associated output error, and the correction of the weights. The proposed model makes a twist for the classical backpropagation algorithm meaning that all the weights of the multilayer perceptron will be reset and randomly regenerated after a certain number of training epochs. This model will have a pulsating effect that will also prevent the blockage of the perceptron on poor local minimum points. This research is useful in the Knowledge Discovery in Databases process because the classification gets the same performance results as in the case of a Distributed Committee Machine.
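The periodic weight reset described above can be sketched in a few lines of NumPy: a small MLP is trained with backpropagation, its weights are re-randomised every reset_every epochs, and the best weights seen so far are retained, loosely emulating a sequential committee. The dataset, layer sizes, and schedule are invented for illustration, and this is not the author's exact algorithm.

```python
# Toy sketch of the "pulsating" idea: an MLP whose weights are re-randomised
# every `reset_every` epochs while the best weights seen so far are kept.
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like toy target

def init(n_in=2, n_hidden=8):
    return [rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0, 0.5, (n_hidden, 1)), np.zeros(1)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, X):
    h = np.tanh(X @ w[0] + w[1])
    return h, sigmoid(h @ w[2] + w[3])

def train(epochs=3000, reset_every=500, lr=0.5):
    w, best_w, best_acc = init(), None, 0.0
    for epoch in range(1, epochs + 1):
        h, out = forward(w, X)
        err = out - y                              # gradient of cross-entropy w.r.t. output logits
        grad_w2 = h.T @ err / len(X)
        grad_b2 = err.mean(axis=0)
        dh = (err @ w[2].T) * (1 - h ** 2)         # backprop through tanh hidden layer
        grad_w1 = X.T @ dh / len(X)
        grad_b1 = dh.mean(axis=0)
        for p, g in zip(w, [grad_w1, grad_b1, grad_w2, grad_b2]):
            p -= lr * g
        acc = ((forward(w, X)[1] > 0.5) == y).mean()
        if acc > best_acc:                         # keep the best weights across "pulses"
            best_acc, best_w = acc, [p.copy() for p in w]
        if epoch % reset_every == 0:               # pulse: restart from fresh random weights
            w = init()
    return best_acc, best_w

print("best training accuracy:", train()[0])
```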
9

Mathews, Larry K. "An Industry Program for Managing PWSCC of Closure Head Penetrations". In 10th International Conference on Nuclear Engineering. ASMEDC, 2002. http://dx.doi.org/10.1115/icone10-22365.

Abstract:
The Alloy 600 Issues Task Group of the EPRI managed Materials Reliability Program initiated an industry program to address the generic aspects of the Alloy 182 weld cracking in the A hot leg nozzle weld at V. C. Summer in December 2000. The generic affects of the recent cracks in the Alloy 600 Control Rod Drive Module (CRDM) and thermocouple (T/C) nozzles at Oconee 1 in November 2000 would also be included. This need for a concerted industry effort for head penetrations was further emphasized by the discovery of similar cracking at Arkansas Nuclear One (ANO) Unit 1 and the other two Oconee units during early 2001. Prior to the experiences at Oconee and ANO, there had been only one reported case of a through wall crack in a penetration at Bugey 3 in France in 1991. The predominant form of PWSCC cracking discovered in the Alloy 600 nozzles between 1991 and the recent events had been axial cracks initiated on the inner surface of the penetration tubes. This type of cracking had been addressed through an industry program in response to Generic Letter 97-01, and included a series of lead plant inspections which were being carried out over several years. Most of the cracks at Oconee and ANO 1, however, appeared to originate on the outside surface of the stub of the penetration tube extending below the J-groove weld, or in the weld itself. Circumferential cracking above the nozzle attachment weld was discovered on four nozzles at two of the Oconee units. During inspections in the Fall 2001 outages, additional leaking or cracked nozzles were discovered at several other plants. The models that had been used to rank the susceptibility of the plants to ID initiated flaws needed revision to account for the new phenomena. Additionally, the presence of circumferential cracks above the attachment weld presented the potential safety concerns of rod ejection and small break LOCA. Also, the NDE techniques that had been developed and qualified for the ID initiated flaws would be unable to detect the OD initiated flaws, so new NDE techniques and delivery capabilities were needed. Finally the repairs required for the Oconee and ANO flaws were extremely costly and dose intensive. Therefore, new repair and/or mitigation methods and delivery techniques were needed. The MRP program was established to address these areas and has evolved significantly as more information has become available. It includes activities in assessment and management of the issue, inspection capability, and repair and mitigation. Because of the safety implications of the circumferential cracking, the Nuclear Regulatory Commission issued NRC Bulletin 2001–01 on August 3, 2001. The MRP program also included a generic submittal to assist utilities in responding to the bulletin. Long term activities to provide utilities with appropriate tools for managing the PWSCC of reactor head penetrations are planned.
10

Fernando Plácido Da Conceição, Vítor, Rafaela Marques, Pedro Água, and Joakim Dahlman. "Ecological Collaborative Support System for maritime navigation teams". In 5th International Conference on Human Systems Engineering and Design: Future Trends and Applications (IHSED 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1004124.

Abstract:
Maritime navigation is a demanding and complex domain that involves risks for people, the environment, and economic activity. The tasks associated with its execution require advanced training, expertise, experience, and a collaborative Navigation Team. Furthermore, naval operations demand higher readiness, accuracy, and resilience due to additional constraints. The response to these challenges has been integrating further automation and information systems. However, the effectiveness of innovative trends had been questioned by recent naval accidents like those involving the US and Norwegian naval ships. In bridge crews, collaboration is progressively more dependent on technological means since they are the information sources, and team members need to share and exchange different information formats besides audio. Furthermore, the increasing number of control functions and information systems required to strengthen the bridge situational awareness came with an additional cost to human operators. Therefore, navigation teams need further assistance in this challenging context to achieve a consistent and coherent situational awareness regarding the integrated systems in use, comprising technological and human agents' activities. The proposed solution under development is a Collaborative Decision Support System (C-DSS) fitted to the vessels' bridge systems requirements to reduce the cognitive workload, enhance collaboration between team members and information systems, and strengthen team situational awareness and sensemaking. Several studies addressed the need to provide enhanced interfaces with higher levels of abstraction representation, adjusted to the changed role of human operators, easily adaptable; improved collaboration between humans and automated agents, and superior information integration from internal and external environments. The most critical property of interfaces is to simplify the "discovery of the meaningfulness" of the problem space. World's representation should include the relevant and critical elements tailored to the task, augmenting the interaction experience, increasing the decision-making skill, and assisting the discovery of significant phenomena. The used methodology was an anthropocentric approach to innovation - design thinking. The process was performed with five phases: empathy, definition, idealization, prototyping and tests. Interface design prototypes were made with Mockups, covering the following several team roles. Usability tests, questionnaires and interviews were applied to validate and assess the C-DSS. Five focus group tests were made iteratively, with fifteen SMEs, twice with navigators, and once with SMEs from the other role, three in each iterative evaluation test, with a 1.5-hour duration. Following a snowball selection principle, participants were recruited from the Portuguese navy with the organization's guidance to ensure that all participants had an extensive seagoing experience. At the current stage of the C-DSS development, the results indicate significant potential for interface strategies. Results show that end-users would like to have the C-DSS, considering it innovative, friendly, easy to learn and with the information they need. The usability test allowed us to correct and improve numerous user interface design issues. The main difficulties maintained in terms of usability were related to recording data.
The envisaged C-DSS is fitted to the vessels' bridge systems requirements embracing several prerequisites like being portable and customizable, enabling goals and priorities' management, logging performance and behavioural data, sharing different information formats, supporting information synchronization, providing situational awareness information about the system and operators. This study contributes to the understanding of the collaborative decision-making process in navigation teams through two objectives: first, systematising the main difficulties and challenges and, second, presenting a desirable solution, possible from a technological and financially viable point of view. The developed prototype has four distinct graphic interfaces, that complement each other and are oriented to the context of the user's role, based on the continuous contribution of target users, that is, elements belonging to navigation teams. The contributions allowed an improved understanding of the problem, idealise the solution, and improve the C-DSS, from design to insertion and adaptation of new functions. In the validation process of the prototype, it was found that the experts would like to use the C-DSS, stating that they would have greater autonomy and, even so, would be able to make an exceptional contribution to the team. Finally, the design thinking approach provided a basis for continuous feedback from end-users, becoming a twofold benefit by triggering new ideas of possible solutions to be deployed onboard.

Reports by organizations on the topic "Task Group on Discovery and Metadata"

1

Corriveau, L., J. F. Montreuil, O. Blein, E. Potter, M. Ansari, J. Craven, R. Enkin et al. Metasomatic iron and alkali calcic (MIAC) system frameworks: a TGI-6 task force to help de-risk exploration for IOCG, IOA and affiliated primary critical metal deposits. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/329093.

Abstract:
Australia's and China's resources (e.g. Olympic Dam Cu-U-Au-Ag and Bayan Obo REE deposits) highlight how discovery and mining of iron oxide copper-gold (IOCG), iron oxide±apatite (IOA) and affiliated primary critical metal deposits in metasomatic iron and alkali-calcic (MIAC) mineral systems can secure a long-term supply of critical metals for Canada and its partners. In Canada, MIAC systems comprise a wide range of undeveloped primary critical metal deposits (e.g. NWT NICO Au-Co-Bi-Cu and Québec HREE-rich Josette deposits). Underexplored settings are parts of metallogenic belts that extend into Australia and the USA. Some settings, such as the Camsell River district explored by the Dene First Nations in the NWT, have infrastructures and 100s of km of historic drill cores. Yet vocabularies for mapping MIAC systems are scanty. Ability to identify metasomatic vectors to ore is fledging. Deposit models based on host rock types, structural controls or metal associations underpin the identification of MIAC-affinities, assessment of systems' full mineral potential and development of robust mineral exploration strategies. This workshop presentation reviews public geoscience research and tools developed by the Targeted Geoscience Initiative to establish the MIAC frameworks of prospective Canadian settings and global mining districts and help de-risk exploration for IOCG, IOA and affiliated primary critical metal deposits. The knowledge also supports fundamental research, environmental baseline assessment and societal decisions. It fulfills objectives of the Canadian Mineral and Metal Plan and the Critical Mineral Mapping Initiative among others. The GSC-led MIAC research team comprises members of the academic, private and public sectors from Canada, Australia, Europe, USA, China and Dene First Nations. The team's novel alteration mapping protocols, geological, mineralogical, geochemical and geophysical framework tools, and holistic mineral systems and petrophysics models mitigate and solve some of the exploration and geosciences challenges posed by the intricacies of MIAC systems. The group pioneers the use of discriminant alteration diagrams and barcodes, the assembly of a vocab for mapping and core logging, and the provision of field short courses, atlas, photo collections and system-scale field, geochemical, rock physical properties and geophysical datasets are in progress to synthesize shared signatures of Canadian settings and global MIAC mining districts. Research on a metamorphosed MIAC system and metamorphic phase equilibria modelling of alteration facies will provide a foundation for framework mapping and exploration of high-grade metamorphic terranes where surface and near surface resources are still to be discovered and mined as are those of non-metamorphosed MIAC systems.
2

ARL/CARL Task Force on Marrakesh Treaty Implementation: Final Report / Groupe de travail de l’ARL-ABRC sur la mise en oeuvre du Traité de Marrakech. Association of Research Libraries and Canadian Association of Research Libraries, February 2024. http://dx.doi.org/10.29242/report.marrakesh2023.

Abstract:
This report published by the Association of Research Libraries (ARL) and the Canadian Association of Research Libraries (CARL) summarizes recommendations for libraries in each of the areas explored by the ARL/CARL Task Force on Marrakesh Treaty Implementation. The report also includes recommendations for ARL and CARL—each Association’s respective committees will carry forth these recommendations. Through a three-year pilot project, the ARL/CARL task force explored elements of Marrakesh Treaty implementation in the US and Canada, focusing on what was required to enable scholars’ unfettered access to materials in accessible formats in their fields of scholarship and preferred languages. The ARL/CARL pilot investigated several aspects of Marrakesh Treaty implementation, including identifying beneficiary needs within a university setting, identifying and implementing metadata requirements for searching capabilities, implementing the discovery systems within the pilot libraries, and developing strategies and opportunities for the pilot project members to socialize the work being done.
