Journal articles on the topic 'Task Group on Discovery and Metadata'

To see the other types of publications on this topic, follow the link: Task Group on Discovery and Metadata.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Task Group on Discovery and Metadata.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Turp, Clara, Lee Wilson, Julienne Pascoe, and Alex Garnett. "The Fast and the FRDR: Improving Metadata for Data Discovery in Canada." Publications 8, no. 2 (May 2, 2020): 25. http://dx.doi.org/10.3390/publications8020025.

Abstract:
The Federated Research Data Repository (FRDR), developed through a partnership between the Canadian Association of Research Libraries’ Portage initiative and the Compute Canada Federation, improves research data discovery in Canada by providing a single search portal for research data stored across Canadian governmental, institutional, and discipline-specific data repositories. While this national discovery layer helps to de-silo Canadian research data, challenges in data discovery remain due to a lack of standardized metadata practices across repositories. In recognition of this challenge, a Portage task group, drawn from a national network of experts, has engaged in a project to map subject keywords to the Online Computer Library Center’s (OCLC) Faceted Application of Subject Terminology (FAST) using the open source OpenRefine software. This paper will describe the task group’s project, discuss the various approaches undertaken by the group, and explore how this work improves data discovery and may be adopted by other repositories and metadata aggregators to support metadata standardization.
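
As an illustration of the kind of lookup such a mapping project automates, the sketch below queries OCLC's assignFAST suggest service for candidate FAST headings; the endpoint and parameter names follow OCLC's public assignFAST documentation and are assumptions here, not the task group's actual OpenRefine workflow.

```python
# Sketch: candidate FAST headings for a free-text keyword via OCLC's
# assignFAST suggest service (endpoint/parameters per OCLC's public
# documentation; treat as assumptions, not the task group's workflow).
import requests

def suggest_fast(keyword, rows=5):
    resp = requests.get(
        "https://fast.oclc.org/searchfast/fastsuggest",
        params={
            "query": keyword,
            "queryIndex": "suggestall",
            "queryReturn": "suggestall,idroot,auth",
            "suggest": "autoSubject",
            "rows": rows,
        },
        timeout=30,
    )
    resp.raise_for_status()
    docs = resp.json().get("response", {}).get("docs", [])
    # Each hit pairs a FAST identifier with its authorized heading.
    return [(d.get("idroot"), d.get("auth")) for d in docs]

print(suggest_fast("climate change"))
```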
2

Hoarfrost, Adrienne, Nick Brown, C. Titus Brown, and Carol Arnosti. "Sequencing data discovery with MetaSeek." Bioinformatics 35, no. 22 (June 21, 2019): 4857–59. http://dx.doi.org/10.1093/bioinformatics/btz499.

Abstract:
Summary: Sequencing data resources have increased exponentially in recent years, as has interest in large-scale meta-analyses of integrated next-generation sequencing datasets. However, curation of integrated datasets that match a user’s particular research priorities is currently a time-intensive and imprecise task. MetaSeek is a sequencing data discovery tool that enables users to flexibly search and filter on any metadata field to quickly find the sequencing datasets that meet their needs. MetaSeek automatically scrapes metadata from all publicly available datasets in the Sequence Read Archive, cleans and parses messy, user-provided metadata into a structured, standard-compliant database and predicts missing fields where possible. MetaSeek provides a web-based graphical user interface and interactive visualization dashboard, as well as a programmatic API to rapidly search, filter, visualize, save, share and download matching sequencing metadata.
Availability and implementation: The MetaSeek online interface is available at https://www.metaseek.cloud/. The MetaSeek database can also be accessed via API to programmatically search, filter and download all metadata. MetaSeek source code, metadata scrapers and documents are available at https://github.com/MetaSeek-Sequencing-Data-Discovery/metaseek/.
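
A minimal sketch of programmatic access, assuming a hypothetical /api/datasets route and filter parameter names (the abstract states only that the database is API-accessible):

```python
# Sketch: programmatic search of MetaSeek metadata. The /api/datasets
# route and the filter names are hypothetical; only the base URL and the
# existence of an API are stated in the abstract.
import requests

BASE = "https://www.metaseek.cloud"

def search_datasets(**filters):
    resp = requests.get(f"{BASE}/api/datasets", params=filters, timeout=30)
    resp.raise_for_status()
    return resp.json()

# e.g. marine metagenomes (hypothetical filter fields)
hits = search_datasets(library_source="METAGENOMIC", env_package="water")
print(len(hits))
```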
3

Hagen, Brianne. "Book Review: Managing Metadata in Web-scale Discovery Systems." Library Resources & Technical Services 61, no. 3 (July 14, 2017): 172. http://dx.doi.org/10.5860/lrts.61n3.172.

Abstract:
Managing metadata in libraries today presents challenges to information professionals concerned with quality control, providing relevant search results, and taming the volume of items available for access in a web-scale discovery system. No longer are libraries limited to the collections they “own.” Catalogers and metadata professionals now assume the responsibility of providing access to millions of resources, often with limitations on who can access that resource. Relationships with vendors provide opportunities to help manage the gargantuan scale of information. Of course those opportunities come with their own problems as relationships among vendors can be contentious, leaving metadata managers to figure out quality control on a grand scale. In addition to this politicized information landscape, new ways of managing and creating metadata are emerging, leaving information professionals with the task of managing multiple schema in different formats. The essays in Managing Metadata in Web-scale Discovery Systems seek to address issues in managing the large scale of information overwhelming catalogers today, with potential solutions for taming the beast of exponentially increasing data.
4

Miles, Simon, Juri Papay, Terry Payne, Michael Luck, and Luc Moreau. "Towards a Protocol for the Attachment of Metadata to Grid Service Descriptions and Its Use in Semantic Discovery." Scientific Programming 12, no. 4 (2004): 201–11. http://dx.doi.org/10.1155/2004/170481.

Abstract:
Service discovery in large scale, open distributed systems is difficult because of the need to filter out services suitable to the task at hand from a potentially huge pool of possibilities. Semantic descriptions have been advocated as the key to expressive service discovery, but the most commonly used service descriptions and registry protocols do not support such descriptions in a general manner. In this paper, we present a protocol, its implementation and an API for registering semantic service descriptions and other task/user-specific metadata, and for discovering services according to these. Our approach is based on a mechanism for attaching structured and unstructured metadata, which we show to be applicable to multiple registry technologies. The result is an extremely flexible service registry that can be the basis of a sophisticated semantically enhanced service discovery engine, an essential component of a Semantic Grid.
5

Michel, Franck, and The Bioschemas Community. "Bioschemas & Schema.org: a Lightweight Semantic Layer for Life Sciences Websites." Biodiversity Information Science and Standards 2 (May 22, 2018): e25836. http://dx.doi.org/10.3897/biss.2.25836.

Abstract:
Web portals are commonly used to expose and share scientific data. They enable end users to find, organize and obtain data relevant to their interests. With the continuous growth of data across all science domains, researchers commonly find themselves overwhelmed as finding, retrieving and making sense of data becomes increasingly difficult. Search engines can help find relevant websites, but the short summarizations they provide in results lists are often little informative on how relevant a website is with respect to research interests. To yield better results, a strategy adopted by Google, Yahoo, Yandex and Bing involves consuming structured content that they extract from websites. Towards this end, the schema.org collaborative community defines vocabularies covering common entities and relationships (e.g., events, organizations, creative works) (Guha et al. 2016). Websites can leverage these vocabularies to embed semantic annotations within web pages, in the form of markup using standard formats. Search engines, in turn, exploit semantic markup to enhance the ranking of most relevant resources while providing more informative and accurate summarization. Additionally, adding such rich metadata is a step forward to make data FAIR, i.e. Findable, Accessible, Interoperable and Reusable. Although schema.org encompasses terms related to data repositories, datasets, citations, events, etc., it lacks specialized terms for modeling research entities. The Bioschemas community (Garcia et al. 2017) aims to extend schema.org to support markup for Life Sciences websites. A major pillar lies in reusing types from schema.org as well as well-adopted domain ontologies, while only proposing a limited set of new types. The goal is to enable semantic cross-linking between knowledge graphs extracted from marked-up websites. An overview of the main types is presented in Fig. 1. Bioschemas also provides profiles that specify how to describe an entity of some type. For instance, the protein profile requires a unique identifier, recommends to list transcribed genes and associated diseases, and points to recommended terms from the Protein Ontology and Semantic Science Integrated Ontology. The success of schema.org lies in its simplicity and the support by major search engines. By extending schema.org, Bioschemas enables life sciences research communities to benefit from a lightweight semantic layer on websites and thus facilitates discoverability and interoperability across them. From an initial pilot including just a few bio-types such as proteins and samples, the Bioschemas community has grown and is now opening up towards other disciplines. The biodiversity domain is a promising candidate for such further extensions. We can think of additional profiles to account for biodiversity-related information. For instance, since taxonomic registers are the backbone of many web portals and databases, new profiles could describe taxa and scientific names while reusing well-adopted vocabularies such as Darwin Core terms (Baskauf et al. 2016) or TDWG ontologies (TDWG Vocabulary Management Task Group 2013). Fostering the use of such markup by web portals reporting traits, observations or museum collections could not only improve information discovery using search engines, but could also be a key to spur large-scale biodiversity data integration scenarios.
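
The sketch below generates the sort of JSON-LD markup a life-sciences page would embed; the property choices echo the Bioschemas protein profile but are illustrative assumptions, not the normative profile.

```python
# Sketch: JSON-LD markup in the spirit of the Bioschemas protein profile.
# Property names here are illustrative assumptions, not the normative profile.
import json

protein = {
    "@context": "https://schema.org",
    "@type": "Protein",  # type proposed by Bioschemas
    "identifier": "UniProt:P04637",
    "name": "Cellular tumor antigen p53",
    "associatedDisease": {
        "@type": "MedicalCondition",
        "name": "Li-Fraumeni syndrome",
    },
}

# The block a site embeds in its HTML so crawlers can extract it:
print(f'<script type="application/ld+json">{json.dumps(protein, indent=2)}</script>')
```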
6

Williamschen, Jodi. "Work in Progress: The PCC Task Group on Metadata Application Profiles." Cataloging & Classification Quarterly 58, no. 3-4 (January 30, 2020): 458–63. http://dx.doi.org/10.1080/01639374.2020.1717708.

7

Evans, Bruce J., Karen Snow, Elizabeth Shoemaker, Maurine McCourry, Allison Yanos, Jennifer A. Liss, and Susan Rathbun-Grubb. "Competencies through Community Engagement: Developing the Core Competencies for Cataloging and Metadata Professional Librarians." Library Resources & Technical Services 62, no. 4 (October 3, 2018): 188. http://dx.doi.org/10.5860/lrts.62n4.188.

Abstract:
In 2015 the Association for Library Collections and Technical Services Cataloging and Metadata Management Section (ALCTS CaMMS) Competencies for a Career in Cataloging Interest Group (CECCIG) charged a task force to create a core competencies document for catalogers. The process leading to the final document, the Core Competencies for Cataloging and Metadata Professional Librarians, involved researching the use of competencies documents, envisioning an accessible final product, and engaging in collaborative writing. Additionally, the task force took certain measures to solicit and incorporate feedback from the cataloging community throughout the entire process. The Competencies document was approved by the ALCTS Board of Directors in January 2017. Task force members who were involved in the final stages of the document’s creation detail their processes and purposes in this paper and provide recommendations for groups approaching similar tasks.
8

Ocvirk, Pierre, Gilles Landais, Laurent Michel, Heddy Arab, Sylvain Guehenneux, Thomas Boch, Marianne Brouty, et al. "Associated data: Indexation, discovery, challenges and roles." EPJ Web of Conferences 186 (2018): 02002. http://dx.doi.org/10.1051/epjconf/201818602002.

Abstract:
Astronomers are nowadays required by their funding agencies to make the data obtained through public-financed means (ground and space observatories and labs) available to the public and the community at large. This is a fundamental step in enabling the open science paradigm the astronomical community is striving for. In other words, tabular data (catalogs) arriving to CDS for ingestion into its databases, in particular VizieR, is more and more frequently accompanied by the reduced observed dataset (spectra, images, data cubes, time series). While the benefits of making this associated data available are obvious, the task is very challenging: in this context "big data" takes the meaning of "extremely heterogeneous data", with a diversity of formats and practices among astronomers, even within the FITS standard. Providing librarians with efficient tools to index this data and generate the relevant metadata is therefore paramount.
9

Su, Shian, Vincent J. Carey, Lori Shepherd, Matthew Ritchie, Martin T. Morgan, and Sean Davis. "BiocPkgTools: Toolkit for mining the Bioconductor package ecosystem." F1000Research 8 (May 29, 2019): 752. http://dx.doi.org/10.12688/f1000research.19410.1.

Abstract:
Motivation: The Bioconductor project, a large collection of open source software for the comprehension of large-scale biological data, continues to grow with new packages added each week, motivating the development of software tools focused on exposing package metadata to developers and users. The resulting BiocPkgTools package facilitates access to extensive metadata in computable form covering the Bioconductor package ecosystem, facilitating downstream applications such as custom reporting, data and text mining of Bioconductor package text descriptions, graph analytics over package dependencies, and custom search approaches. Results: The BiocPkgTools package has been incorporated into the Bioconductor project, installs using standard procedures, and runs on any system supporting R. It provides functions to load detailed package metadata, longitudinal package download statistics, package dependencies, and Bioconductor build reports, all in "tidy data" form. BiocPkgTools can convert from tidy data structures to graph structures, enabling graph-based analytics and visualization. An end-user-friendly graphical package explorer aids in task-centric package discovery. Full documentation and example use cases are included. Availability: The BiocPkgTools software and complete documentation are available from Bioconductor (https://bioconductor.org/packages/BiocPkgTools).
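
BiocPkgTools itself is an R package; as a language-neutral sketch of the underlying metadata source, the following pulls and parses the repository's CRAN-style PACKAGES index (the release URL reflects the standard repository layout and is an assumption):

```python
# Sketch: the package metadata BiocPkgTools surfaces in R ultimately lives
# in repository indexes like this CRAN-style PACKAGES file (URL assumed).
import requests

url = "https://bioconductor.org/packages/release/bioc/src/contrib/PACKAGES"
text = requests.get(url, timeout=60).text

packages = []
for stanza in text.strip().split("\n\n"):  # one DCF stanza per package
    fields = {}
    key = None
    for line in stanza.splitlines():
        if line.startswith((" ", "\t")) and key:  # continuation line
            fields[key] += " " + line.strip()
        else:
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    packages.append(fields)

print(len(packages), "packages;", packages[0].get("Package"), "->", packages[0].get("Depends"))
```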
10

Kaiser, Kathryn A., John Chodacki, Ted Habermann, Jennifer Kemp, Laura Paglione, Michelle Urberg, and T. Scott Plutchak. "Metadata: The accelerant we need." Information Services & Use 40, no. 3 (November 10, 2020): 181–91. http://dx.doi.org/10.3233/isu-200094.

Abstract:
Large-scale pandemic events have sent scientific communities scrambling to gather and analyze data to provide governments and policy makers with information to inform decisions and policies needed when imperfect information is all that may be available. Historical records from the 1918 influenza pandemic reflect how little improvement has been made in how government and policy responses are formed when large scale threats occur, such as the COVID-19 pandemic. This commentary discusses three examples of how metadata improvements are being, or may be made, to facilitate gathering and assessment of data to better understand complex and dynamic situations. In particular, metadata strategies can be applied in advance, on the fly or even after events to integrate and enrich perspectives that aid in creating balanced actions to minimize impacts with lowered risk of unintended consequences. Metadata can enhance scope, speed and clarity with which scholarly communities can curate their outputs for optimal discovery and reuse. Conclusions are framed within the Metadata 2020 working group activities that lay a foundation for advancement of scholarly communications to better serve all communities.
11

Dumontier, Michel, Alasdair J. G. Gray, M. Scott Marshall, Vladimir Alexiev, Peter Ansell, Gary Bader, Joachim Baran, et al. "The health care and life sciences community profile for dataset descriptions." PeerJ 4 (August 16, 2016): e2331. http://dx.doi.org/10.7717/peerj.2331.

Abstract:
Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting guideline covers elements of description, identification, attribution, versioning, provenance, and content summarization. This guideline reuses existing vocabularies, and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine readable descriptions of versioned datasets.
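
A minimal sketch of such a description built with rdflib, using DCAT and Dublin Core terms; the element set is simplified relative to the full HCLS profile (which further distinguishes summary, version, and distribution levels), and the IRIs are hypothetical.

```python
# Sketch: a pared-down dataset description using DCAT and Dublin Core,
# in the spirit of the HCLS community profile. IRIs below are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
ds = URIRef("http://example.org/dataset/chembl")
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("ChEMBL", lang="en")))
g.add((ds, DCTERMS.description, Literal("Bioactive molecules with drug-like properties.", lang="en")))
g.add((ds, DCTERMS.publisher, URIRef("http://example.org/org/ebi")))
g.add((ds, DCTERMS.license, URIRef("https://creativecommons.org/licenses/by-sa/3.0/")))

print(g.serialize(format="turtle"))
```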
12

Ball, Alexander, Sean Chen, Jane Greenberg, Cristina Perez, Keith Jeffery, and Rebecca Koskela. "Building a Disciplinary Metadata Standards Directory." International Journal of Digital Curation 9, no. 1 (June 17, 2014): 142–51. http://dx.doi.org/10.2218/ijdc.v9i1.308.

Abstract:
The Research Data Alliance (RDA) Metadata Standards Directory Working Group (MSDWG) is building a directory of descriptive, discipline-specific metadata standards. The purpose of the directory is to promote the discovery, access and use of such standards, thereby improving the state of research data interoperability and reducing duplicative standards development work. This work builds upon the UK Digital Curation Centre's Disciplinary Metadata Catalogue, a resource created with much the same aim in mind. The first stage of the MSDWG's work was to update and extend the information contained in the catalogue. In the current, second stage, a new platform is being developed in order to extend the functionality of the directory beyond that of the catalogue, and to make it easier to maintain and sustain. Future work will include making the directory more amenable to use by automated tools.
13

Ben Seghir, Nadia, Okba Kazar, Khaled Rezeg, and Samir Bourekkache. "A semantic web services discovery approach based on a mobile agent using metadata." International Journal of Intelligent Computing and Cybernetics 10, no. 1 (March 13, 2017): 12–29. http://dx.doi.org/10.1108/ijicc-02-2015-0006.

Abstract:
Purpose: The success of web services involved the adoption of this technology by different service providers through the web, which increased the number of web services, as a result making their discovery a tedious task. The UDDI standard has been proposed for web service publication and discovery. However, it lacks sufficient semantic description in the content of web services, which makes it difficult to find and compose suitable web services during the analysis, search, and matching processes. In addition, few works on semantic web services discovery take into account the user's profile. The purpose of this paper is to optimize the web services discovery by reducing the search space and increasing the number of relevant services.
Design/methodology/approach: The authors propose a new approach for the semantic web services discovery based on the mobile agent, user profile and metadata catalog. In the approach, each user can be described by a profile which is represented in two dimensions: personal dimension and preferences dimension. The description of web service is based on two levels: metadata catalog and WSDL.
Findings: First, the semantic web services discovery reduces the number of relevant services through the application of the matching algorithm "semantic match". The result of this first matching restricts the search space at the level of the UDDI registry, which allows the users to have good results for the "functional match". Second, the use of mobile agents as a communication entity reduces the traffic on the network and the quantity of exchanged information. Finally, the integration of the user profile in the service discovery process facilitates the expression of the user needs and makes the selected service intelligible.
Originality/value: To the best knowledge of the authors, this is the first attempt at implementing the mobile agent technology with the semantic web service technology.
14

Blundell, Jon. "Managing 3D Collections Data: Developing Systems and Metadata for 3D Digitization at Scale." Biodiversity Information Science and Standards 2 (June 15, 2018): e26704. http://dx.doi.org/10.3897/biss.2.26704.

Abstract:
As 3D digitization becomes more common in collections documentation and research, there is a growing need for tools which address the special needs of 3D data stewardship. Systems are needed to manage both the scan data collected during digitization activities, as well as the 3D models generated from that data. These systems need to be able to preserve and make transparent the complex relationships inherent in the data created from 3D digitization activities. They need to connect digital surrogates back to the objects they represent as well as provide an easy way to discover and retrieve that data for research, conservation, and public access. At the core of such systems there needs to be metadata models that can account for the intricacies and specific needs of managing 3D data. This year, the Smithsonian Institution will be deploying new infrastructure which does just that, based on a metadata model developed by a cross disciplinary working group comprised of content experts from across the institution. The platform, which not only manages scan data, but also automates the processing and delivery of 3D digitized content, is open source and is built around modular design principles for easier adoption. This talk builds upon last year’s SPNHC presentation “Automating 3D collection capture: Developing systems for 3D digitization at scale” as it addresses the information systems and infrastructure needed to support the management and delivery of 3D data at scale. We will cover the basic functionality of the Smithsonian’s 3D data repository, how it facilitates data administration, the workflows involved in managing and processing data, and how it connects to the larger Smithsonian infrastructure. As part of this, we will explore the metadata model behind the system and how the model can support greater usability and transparency when sharing and working with 3D scan data.
15

Kopsachilis, Vasilis, and Michail Vaitis. "GeoLOD: A Spatial Linked Data Catalog and Recommender." Big Data and Cognitive Computing 5, no. 2 (April 19, 2021): 17. http://dx.doi.org/10.3390/bdcc5020017.

Abstract:
The increasing availability of linked data poses new challenges for the identification and retrieval of the most appropriate data sources that meet user needs. Recent dataset catalogs and recommenders provide advanced methods that facilitate linked data search, but none exploits the spatial characteristics of datasets. In this paper, we present GeoLOD, a web catalog of spatial datasets and classes and a recommender for spatial datasets and classes possibly relevant for link discovery processes. GeoLOD Catalog parses, maintains and generates metadata about datasets and classes provided by SPARQL endpoints that contain georeferenced point instances. It offers text and map-based search functionality and dataset descriptions in GeoVoID, a spatial dataset metadata template that extends VoID. GeoLOD Recommender pre-computes and maintains, for all identified spatial classes in the Web of Data (WoD), ranked lists of classes relevant for link discovery. In addition, the on-the-fly Recommender allows users to define an uncatalogued SPARQL endpoint, a GeoJSON or a Shapefile and get class recommendations in real time. Furthermore, generated recommendations can be automatically exported in SILK and LIMES configuration files in order to be used for a link discovery task. In the results, we provide statistics about the status and potential connectivity of spatial datasets in the WoD, we assess the applicability of the recommender, and we present the outcome of a system usability study. GeoLOD is the first catalog that targets both linked data experts and geographic information systems professionals, exploits geographical characteristics of datasets and provides an exhaustive list of WoD spatial datasets and classes along with class recommendations for link discovery.
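
A sketch of the kind of probe a catalog like GeoLOD could run to find classes with georeferenced point instances at an endpoint; the detection logic and endpoint URL are assumptions, and the query uses the W3C wgs84 vocabulary.

```python
# Sketch: probing a SPARQL endpoint for classes with georeferenced point
# instances via the W3C wgs84 vocabulary. GeoLOD's real detection logic
# and the endpoint URL are assumptions here.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?class (COUNT(?s) AS ?points) WHERE {
  ?s a ?class ; geo:lat ?lat ; geo:long ?long .
}
GROUP BY ?class ORDER BY DESC(?points) LIMIT 10
"""

sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["class"]["value"], row["points"]["value"])
```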
16

Guay, Beth. "A Case Study on the Path to Resource Discovery." Information Technology and Libraries 36, no. 3 (September 17, 2017): 18–47. http://dx.doi.org/10.6017/ital.v36i3.9966.

Abstract:
A meeting in April 2015 explored the potential withdrawal of valuable collections of microfilm held by the University of Maryland, College Park Libraries. This resulted in a project to identify OCLC record numbers (OCN) for addition to OCLC’s Chadwyck-Healey Early English Books Online (EEBO) KBART file. Initially, the project was an attempt to adapt cataloging workflows to a new environment in which the copy cataloging of e-resources takes place within discovery system tools rather than traditional cataloging utilities and MARC record set or individual record downloads into online catalogs. In the course of the project, it was discovered that the microfilm and e-version bibliographic records contained metadata which had not been utilized by OCLC to improve its link resolution and discovery services for digitized versions of the microfilm resources. This metadata may be advantageous to OCLC and to others in their work to transition from MARC to linked data on the Semantic Web. With MARC record field indexing and linked data implementations, this collection and others could better support scholarly research. Note: a KBART file is a file compliant with the NISO recommended practice, Knowledge Bases and Related Tools (KBART). See KBART Phase II Working Group, Knowledge Bases and Related Tools (KBART): Recommended Practice: NISO RP-9-2014 (Baltimore, MD: National Information Standards Organization (NISO), 2014), accessed March 14, 2017, http://www.niso.org/publications/rp/rp-9-2014/.
17

Dass, Gaurhari, Manh-Tu Vu, Pan Xu, Enrique Audain, Marc-Phillip Hitz, Björn A. Grüning, Henning Hermjakob, and Yasset Perez-Riverol. "The omics discovery REST interface." Nucleic Acids Research 48, W1 (May 6, 2020): W380—W384. http://dx.doi.org/10.1093/nar/gkaa326.

Abstract:
The Omics Discovery Index is an open source platform that can be used to access, discover and disseminate omics datasets. OmicsDI integrates proteomics, genomics, metabolomics, models and transcriptomics datasets. Using an efficient indexing system, OmicsDI integrates different biological entities including genes, transcripts, proteins, metabolites and the corresponding publications from PubMed. In addition, it implements a group of pipelines to estimate the impact of each dataset by tracing the number of citations, reanalyses and biological entities reported by each dataset. Here, we present the OmicsDI REST interface (www.omicsdi.org/ws/) to enable programmatic access to any dataset in OmicsDI or all the datasets for a specific provider (database). Clients can perform queries on the API using different metadata information such as sample details (species, tissues, etc.), instrumentation (mass spectrometer, sequencer), keywords and other provided annotations. In addition, we present two different libraries in R and Python to facilitate the development of tools that can programmatically interact with the OmicsDI REST interface.
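
A minimal query sketch against the REST interface; the base URL is from the abstract, while the /dataset/search route, parameters, and response fields are assumptions to verify against the API documentation.

```python
# Sketch: querying the OmicsDI REST interface. The base URL comes from the
# abstract; the /dataset/search route, parameters, and response fields are
# assumptions to check against the service's documentation.
import requests

resp = requests.get(
    "https://www.omicsdi.org/ws/dataset/search",
    params={"query": "human liver proteome", "size": 5},
    timeout=30,
)
resp.raise_for_status()
for d in resp.json().get("datasets", []):
    print(d.get("id"), d.get("source"), d.get("title"))
```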
18

Helal, Ahmed, Mossad Helali, Khaled Ammar, and Essam Mansour. "A demonstration of KGLac." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2675–78. http://dx.doi.org/10.14778/3476311.3476317.

Abstract:
Data science's growing success relies on knowing where a relevant dataset exists, understanding its impact on a specific task, finding ways to enrich a dataset, and leveraging insights derived from it. With the growth of open data initiatives, data scientists need an extensible set of effective discovery operations to find relevant data from their enterprise datasets accessible via data discovery systems or open datasets accessible via data portals. Existing portals and systems suffer from limited discovery support and do not track the use of a dataset or insights derived from it. We will demonstrate KGLac, a system that captures metadata and semantics of datasets to construct a knowledge graph (GLac) interconnecting data items, e.g., tables and columns. KGLac supports various data discovery operations via SPARQL queries for table discovery, unionable and joinable tables, plus annotation with related derived insights. We harness a broad range of Machine Learning (ML) approaches with GLac to enable automatic graph learning for advanced and semantic data discovery. The demo will showcase how KGLac facilitates data discovery and enrichment while developing an ML pipeline to evaluate potential gender salary bias in IT jobs.
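
To make the idea concrete, here is a purely hypothetical SPARQL sketch of a "joinable tables" query over a GLac-style graph; the kglac: vocabulary and IRIs are invented for illustration, since the abstract does not publish the actual graph schema.

```python
# Sketch: a "tables joinable with a given table" discovery query over a
# GLac-style graph. The kglac: vocabulary and IRIs are invented for
# illustration; the abstract does not publish the actual graph schema.
JOINABLE = """
PREFIX kglac: <http://example.org/kglac#>
SELECT DISTINCT ?otherTable WHERE {
  ?col1 kglac:partOf <http://example.org/table/employees> .
  ?col2 kglac:partOf ?otherTable .
  ?col1 kglac:joinableWith ?col2 .   # similarity edge learned over content
  FILTER(?otherTable != <http://example.org/table/employees>)
}
"""
print(JOINABLE)
```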
19

Oliver, Chris. "Identifying Resources: FRBR and Accessibility." Scientific and Technical Libraries, no. 7 (July 1, 2017): 42–54. http://dx.doi.org/10.33186/1027-3689-2017-7-42-54.

Abstract:
This paper will outline some of the key aspects of the FRBR family of conceptual models that support resource discovery especially for persons who are blind, visually impaired, or otherwise print disabled. The FRBR family of models have had a significant influence on the ways in which communities around the globe perceive and understand the bibliographic universe. This paper will focus on two areas where the conceptual models have had an important impact: bibliographic information as data and the precise delineation between content and carrier. The paper focuses on these two areas because they are of particular interest for a user with a print disability who approaches the task of discovering an appropriate resource. FRBR modeling, as expressed in the original models or in the new consolidated model, FRBR-LRM, offers a roadmap for structuring metadata in ways that allow more options for resource discovery in an increasingly global context.
20

Leng, Chew Bee, Kamsiah Mohd Ali, and Ch’ng Eng Hoo. "Open access repositories on open educational resources." Asian Association of Open Universities Journal 11, no. 1 (August 1, 2016): 35–49. http://dx.doi.org/10.1108/aaouj-06-2016-0005.

Abstract:
Purpose: Triggered by the advancement of information and communications technology, open access repositories (a variant of digital libraries) are one of the important changes impacting library services. In the context of openness to a wider community to access free resources, Wawasan Open University Library initiated a research project to build open access repositories on open educational resources. Open educational resources (OER) are an area of a multifaceted open movement in education. The purpose of this paper is to show how two web portal repositories on OER materials were developed adopting a Japanese open source software called WEKO.
Design/methodology/approach: The design approach is based on a pull-to-push strategy whereby metadata of scholarly open access materials kept within the institution's and network communities' digital databases were harvested, using the Open Archives Initiative Protocol for Metadata Harvesting method, into another open knowledge platform for discovery by other users.
Findings: Positive results emanating from the university open access repositories development showed how it strengthened the role of the librarian as manager of institutional assets, successfully making the content freely available from this open knowledge platform for reuse in learning and teaching.
Research limitations/implications: Developing further programmes to encourage and influence faculty members and prospective stakeholders to use and contribute content to the valuable repositories is indeed a challenging task.
Originality/value: This paper provides insight for academic libraries on how open access repositories development and metadata analysis can enhance new professional challenges for information professionals in the field of data management, data quality and the intricacies of supporting data repositories, and build new open models of collaboration across institutions and libraries. This paper also describes future collaboration work with institutions in sharing their open access resources.
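
A sketch of the harvesting step using only standard OAI-PMH verbs (ListRecords with resumption tokens); the repository endpoint is hypothetical, and this is not the WEKO implementation itself.

```python
# Sketch: harvesting Dublin Core records with standard OAI-PMH verbs and
# resumption tokens. The endpoint is hypothetical; this is not the WEKO
# implementation itself.
import requests
import xml.etree.ElementTree as ET

ENDPOINT = "http://example.org/oai"
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
while True:
    root = ET.fromstring(requests.get(ENDPOINT, params=params, timeout=60).content)
    for rec in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        title = rec.find(".//dc:title", NS)
        print(title.text if title is not None else "(no title)")
    token = root.find(".//oai:resumptionToken", NS)
    if token is None or not (token.text or "").strip():
        break  # no more pages
    params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}
```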
21

Othman, Houcemeddine, Lyndon Zass, Jorge E. B. da Rocha, Fouzia Radouani, Chaimae Samtal, Ichrak Benamri, Judit Kumuthini, et al. "African Genomic Medicine Portal: A Web Portal for Biomedical Applications." Journal of Personalized Medicine 12, no. 2 (February 11, 2022): 265. http://dx.doi.org/10.3390/jpm12020265.

Abstract:
Genomics data are currently being produced at unprecedented rates, resulting in increased knowledge discovery and submission to public data repositories. Despite these advances, genomic information on African-ancestry populations remains significantly low compared with European- and Asian-ancestry populations. This information is typically segmented across several different biomedical data repositories, which often lack sufficient fine-grained structure and annotation to account for the diversity of African populations, leading to many challenges related to the retrieval, representation and findability of such information. To overcome these challenges, we developed the African Genomic Medicine Portal (AGMP), a database that contains metadata on genomic medicine studies conducted on African-ancestry populations. The metadata is curated from two public databases related to genomic medicine, PharmGKB and DisGeNET. The metadata retrieved from these source databases were limited to genomic variants that were associated with disease aetiology or treatment in the context of African-ancestry populations. Over 2000 variants relevant to populations of African ancestry were retrieved. Subsequently, domain experts curated and annotated additional information associated with the studies that reported the variants, including geographical origin, ethnolinguistic group, level of association significance and other relevant study information, such as study design and sample size, where available. The AGMP functions as a dedicated resource through which to access African-specific information on genomics as applied to health research, through querying variants, genes, diseases and drugs. The portal and its corresponding technical documentation, implementation code and content are publicly available.
22

Cabanac, Guillaume, Theodora Oikonomidi, and Isabelle Boutron. "Day-to-day discovery of preprint–publication links." Scientometrics 126, no. 6 (April 18, 2021): 5285–304. http://dx.doi.org/10.1007/s11192-021-03900-7.

Abstract:
Preprints promote the open and fast communication of non-peer-reviewed work. Once a preprint is published in a peer-reviewed venue, the preprint server updates its web page: a prominent hyperlink leading to the newly published work is added. Linking preprints to publications is of utmost importance as it provides readers with the latest version of a now certified work. Yet leading preprint servers fail to identify all existing preprint–publication links. This limitation calls for a more thorough approach to this critical information retrieval task: overlooking published evidence translates into partial and even inaccurate systematic reviews on health-related issues, for instance. We designed an algorithm leveraging the Crossref public and free source of bibliographic metadata to comb the literature for preprint–publication links. We tested it on a reference preprint set identified and curated for a living systematic review on interventions for preventing and treating COVID-19, performed by an international collaboration: the COVID-NMA initiative (covid-nma.com). The reference set comprised 343 preprints, 121 of which appeared as a publication in a peer-reviewed journal. While the preprint servers identified 39.7% of the preprint–publication links, our linker identified 90.9% of the expected links with no clues taken from the preprint servers. The accuracy of the proposed linker is 91.5% on this reference set, with 90.9% sensitivity and 91.9% specificity. This is a 16.26% increase in accuracy compared to that of preprint servers. We release this software as supplementary material to foster its integration into preprint servers’ workflows and enhance a daily preprint–publication chase that is useful to all readers, including systematic reviewers. This preprint–publication linker currently provides day-to-day updates to the biomedical experts of the COVID-NMA initiative.
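
The core move of such a linker can be sketched against the Crossref works endpoint; the matching rule below (title similarity over a fixed threshold) is a deliberately simplified stand-in for the paper's algorithm.

```python
# Sketch: the core of a Crossref-based preprint-publication linker. The
# works endpoint and query.bibliographic parameter are real Crossref API
# features; the 0.9 title-similarity threshold is a simplified stand-in
# for the paper's matching rules.
import requests
from difflib import SequenceMatcher

def find_publication(preprint_title):
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": preprint_title, "rows": 5},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        candidate = (item.get("title") or [""])[0]
        score = SequenceMatcher(None, preprint_title.lower(), candidate.lower()).ratio()
        if item.get("type") == "journal-article" and score > 0.9:
            return item["DOI"], candidate, score
    return None
```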
23

Veena, S. T., and A. Selvaraj. "Forensic steganalysis for identification of steganography software tools using multiple format image." International Journal of Informatics and Communication Technology (IJ-ICT) 10, no. 3 (December 1, 2021): 188. http://dx.doi.org/10.11591/ijict.v10i3.pp188-197.

Abstract:
Today many steganographic software tools are freely available on the Internet, which helps even callow users to have covert communication through digital images. Targeted structural image steganalysers identify only a particular steganographic software tool by tracing the unique fingerprint left in the stego images by the steganographic process. Image steganalysis proves to be a tough challenging task if the process is blind and universal, the secret payload is very small and the cover image is in lossless compression format. A payload-independent universal steganalyser which identifies the steganographic software tools by exploiting the traces of artefacts left in the image and in its metadata for five different image formats is proposed. First, the artefacts in image metadata are identified and clustered to form distinct groups by extended K-means clustering. The group that is identical to the cover is further processed by extracting the artefacts in the image data. This is done by developing a signature of the steganographic software tool from its stego images. They are then matched for steganographic software tool identification. Thus, the steganalyser successfully identifies the stego images in five different image formats, out of which four are lossless, even for a payload of 1 byte. Its performance is also compared with the existing steganalyser software tool.
24

Lackner, Arthur, Said Fathalla, Mojtaba Nayyeri, Andreas Behrend, Rainer Manthey, Sören Auer, Jens Lehmann, and Sahar Vahdati. "Analysing the evolution of computer science events leveraging a scholarly knowledge graph: a scientometrics study of top-ranked events in the past decade." Scientometrics 126, no. 9 (July 10, 2021): 8129–51. http://dx.doi.org/10.1007/s11192-021-04072-0.

Abstract:
The publish-or-perish culture of scholarly communication results in quality and relevance being subordinate to quantity. Scientific events such as conferences play an important role in scholarly communication and knowledge exchange. Researchers in many fields, such as computer science, often need to search for events to publish their research results, establish connections for collaborations with other researchers and stay up to date with recent works. Researchers need to have a meta-research understanding of the quality of scientific events to publish in high-quality venues. However, there are many diverse and complex criteria to be explored for the evaluation of events. Thus, finding events with quality-related criteria becomes a time-consuming task for researchers and often results in an experience-based subjective evaluation. OpenResearch.org is a crowd-sourcing platform that provides features to explore previous and upcoming events in computer science, based on a knowledge graph. In this paper, we devise an ontology representing scientific events metadata. Furthermore, we introduce an analytical study of the evolution of computer science events leveraging the OpenResearch.org knowledge graph. We identify common characteristics of these events, formalize them, and combine them as a group of metrics. These metrics can be used by potential authors to identify high-quality events. On top of the improved ontology, we analyzed the metadata of renowned conferences in various computer science communities, such as VLDB, ISWC, ESWC, WIMS, and SEMANTiCS, in order to inspect their potential as event metrics.
25

Dougan, Kirstin. "The “Black Box”: How Students Use a Single Search Box to Search for Music Materials." Information Technology and Libraries 37, no. 4 (December 17, 2018): 81–106. http://dx.doi.org/10.6017/ital.v37i4.10702.

Abstract:
Given the inherent challenges music materials present to systems and searchers (formats, title forms and languages, and the presence of additional metadata such as work numbers and keys), it is reasonable that those searching for music develop distinctive search habits compared to patrons in other subject areas. This study uses transaction log analysis of the music and performing arts module of a library’s federated discovery tool to determine how patrons search for music materials. It also makes a top-level comparison of searches done using other broadly defined subject disciplines’ modules in the same discovery tool. It seeks to determine, to the extent possible, whether users in each group have different search behaviors in this search environment. The study also looks more closely at searches in the music module to identify other search characteristics such as type of search conducted, use of advanced search techniques, and any other patterns of search behavior.
26

Vaughan, Jason. "Investigations into Library Web-Scale Discovery Services." Information Technology and Libraries 31, no. 1 (March 1, 2008): 32. http://dx.doi.org/10.6017/ital.v31i1.1916.

Abstract:
Web-scale discovery services for libraries provide deep discovery to a library’s local and licensed content, and represent an evolution, perhaps a revolution, for end user information discovery as pertains to library collections. This article frames the topic of web-scale discovery, and begins by illuminating web-scale discovery from an academic library’s perspective – that is, the internal perspective seeking widespread staff participation in the discovery conversation. This included the creation of a discovery task force, a group which educated library staff, conducted internal staff surveys, and gathered observations from early adopters. The article next addresses the substantial research conducted with library vendors which have developed these services. Such work included drafting of multiple comprehensive question lists distributed to the vendors, onsite vendor visits, and continual tracking of service enhancements. Together, feedback gained from library staff, insights arrived at by the Discovery Task Force, and information gathered from vendors collectively informed the recommendation of a service for the UNLV Libraries.
27

Shen, Kangning, Rongrong Tu, Rongju Yao, Sifeng Wang, and Ashish K. Luhach. "Decorative Art Pattern Mining and Discovery Based on Group User Intelligence." Journal of Organizational and End User Computing 33, no. 6 (November 2021): 1–12. http://dx.doi.org/10.4018/joeuc.20211101.oa20.

Abstract:
With the continuous development of real estate and the increasing personalization of people's tastes, more and more house owners are willing to search for and discover their preferred decorative art patterns via various house decoration case-sharing websites or platforms. Through browsing and analyzing existing house decoration cases on the Web, a new house owner can find his or her preferred decorative art patterns; however, this mining and discovery process is often time-consuming and boring due to the big volume of existing house decoration cases on the Web. Therefore, it is becoming a challenging task to develop a time-efficient decorative art pattern mining and discovery method based on the available house decoration cases provided by historical users. Considering this challenge, a novel LSH-based similar house owners clustering approach is proposed. A set of experiments is designed to validate the effectiveness and efficiency of our proposal.
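
One plausible reading of the approach, sketched with MinHash LSH from the datasketch library; the paper's actual features and hash family are not specified in the abstract, so representing users as sets of browsed cases is an assumption.

```python
# Sketch: grouping users by MinHash LSH over the sets of decoration cases
# they browsed -- an assumed feature representation, since the abstract
# does not specify the hash family or features. Uses the datasketch library.
from datasketch import MinHash, MinHashLSH

def signature(cases, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for case_id in cases:
        m.update(case_id.encode("utf8"))
    return m

users = {
    "alice": {"case1", "case2", "case3"},
    "bob": {"case2", "case3", "case4"},
    "carol": {"case8", "case9"},
}

lsh = MinHashLSH(threshold=0.5, num_perm=128)
sigs = {u: signature(c) for u, c in users.items()}
for u, sig in sigs.items():
    lsh.insert(u, sig)

print(lsh.query(sigs["alice"]))  # users whose histories resemble alice's
```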
28

Gesi, Antoinette T., Dominic W. Massaro, and Michael M. Cohen. "Discovery and Expository Methods in Teaching Visual Consonant and Word Identification." Journal of Speech, Language, and Hearing Research 35, no. 5 (October 1992): 1180–88. http://dx.doi.org/10.1044/jshr.3505.1180.

Abstract:
An experiment was conducted to examine the processes involved in lipreading as well as to investigate an optimal approach to teaching lipreading skill. We compared discovery and expository methods of learning to lip-read. Twenty-six college students with normal hearing were trained over 3 days to lip-read consonant-vowel (CV) syllables. The training material consisted of a prerecorded videotape of four different talkers. The task was a forced-choice procedure with feedback. Subjects learned with training, but there was no difference between the two learning methods. As a retention measure, subjects returned 4 weeks later and repeated the training. There were significant savings of the original learning. Three weeks after the retention phase, subjects were tested with a 10-item forced-choice monosyllabic word task. Those subjects who had extensive training on CV syllables did no better on identifying the monosyllabic words than did a control group of 9 subjects with no training. Nevertheless, performance for all three groups (discovery, expository, and no training) improved during training in the word identification task.
29

Orrell, Alison J., Frank F. Eves, and Rich SW Masters. "Motor Learning of a Dynamic Balancing Task After Stroke: Implicit Implications for Stroke Rehabilitation." Physical Therapy 86, no. 3 (March 1, 2006): 369–80. http://dx.doi.org/10.1093/ptj/86.3.369.

Abstract:
Background and Purpose: After a stroke, people often attempt to consciously control their motor actions, which, paradoxically, disrupts optimal performance. A learning strategy that minimizes the accrual of explicit knowledge may circumvent attempts to consciously control motor actions, thereby resulting in better performance. The purpose of this study was to examine the implicit learning of a dynamic balancing task after stroke by use of 1 of 2 motor learning strategies: learning without errors and discovery learning.
Participants and Methods: Ten adults with stroke and 12 older adults practiced a dynamic balancing task on a stabilometer under single-task (balance only) and concurrent-task conditions. Root-mean-square error (in degrees) from horizontal was used to measure balance performance.
Results: The balance performance of the discovery (explicit) learners after stroke was impaired by the imposition of a concurrent cognitive task load. In contrast, the performance of the errorless (implicit) learners (stroke and control groups) and the discovery learning control group was not impaired.
Discussion and Conclusion: The provision of explicit information during rehabilitation may be detrimental to the learning/relearning and execution of motor skills in some people with stroke. The application of implicit motor learning techniques in the rehabilitation setting may be beneficial. [Orrell AJ, Eves FF, Masters RSW. Motor learning of a dynamic balancing task after stroke: implicit implications for stroke rehabilitation. Phys Ther. 2006;86:369–380.]
30

Natvig, Marit K., Shanshan Jiang, and Erlend Stav. "Using open data for digital innovation: Barriers for use and recommendations for publishers." JeDEM - eJournal of eDemocracy and Open Government 13, no. 2 (December 22, 2021): 28–57. http://dx.doi.org/10.29379/jedem.v13i2.666.

Abstract:
Open data from the public sector can fuel the innovation of digital products. This paper investigates barriers and success factors regarding use of open data in such innovations, and how public sector can increase the value of published data. A multimethod approach was used. An initial study identified aspects of relevance through interviews, a system development experiment, and a focus group. An in-depth study used the insight to perform interviews and a survey targeting innovators. Details on data needs, discovery, assessment, and use were found as well as barriers regarding use of open data in digital product innovations. Associated recommendations to data owners are provided regarding how they can increase the innovation capacity through appropriate licenses and service levels; convenient access mechanisms; publishing channels and infrastructures; transparency and dialogue; data, metadata, documentation, and APIs of high quality; harmonization and standardization.
31

Wilkinson, Mark D., Ruben Verborgh, Luiz Olavo Bonino da Silva Santos, Tim Clark, Morris A. Swertz, Fleur D. L. Kelpin, Alasdair J. G. Gray, et al. "Interoperability and FAIRness through a novel combination of Web technologies." PeerJ Computer Science 3 (April 24, 2017): e110. http://dx.doi.org/10.7717/peerj-cs.110.

Abstract:
Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories, ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles. The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.
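
One of the simplest patterns in this family is content negotiation over a single resource IRI, sketched below with a hypothetical IRI and standard RDF media types.

```python
# Sketch: one IRI, two representations, selected by the Accept header.
# The IRI is hypothetical; text/turtle and text/html are standard media types.
import requests

IRI = "http://example.org/record/42"

human = requests.get(IRI, headers={"Accept": "text/html"}, timeout=30)
machine = requests.get(IRI, headers={"Accept": "text/turtle"}, timeout=30)

print(human.headers.get("Content-Type"))    # page for a person
print(machine.headers.get("Content-Type"))  # triples for a client
```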
32

OEHLMANN, RUEDIGER. "THE FUNCTION OF HARMONY AND TRUST IN COLLABORATIVE CHANCE DISCOVERY." New Mathematics and Natural Computation 02, no. 01 (March 2006): 69–83. http://dx.doi.org/10.1142/s1793005706000324.

Abstract:
Collaborative Chance Discovery aims at determining a rare event as a chance for future decision making from a set of potential chances that have been identified by computational methods. Typically based on these potential chances, the members of a workgroup will imagine scenarios that describe situations and event sequences during which a chance could be used. This paper describes a study of how scenarios may emerge from group interactions. Verbal protocols of two software design groups, who conducted the same design task, were analyzed. The experimental group additionally had to use social diagrams to externalize their changing views about the other group members. It was predicted that the externalization causes an increase of references to harmony and trust and that this increased awareness leads to improved scenarios. The protocol analysis confirmed this hypothesis and revealed details of the process of scenario emergence. The new insights gave rise to the proposal of a new model of scenario emergence based on externalizing social context, harmony and trust.
33

Harris, Michael W., Jeff Lyon, Heather Fisher, Joshua Henry, and Sienna M. Wood. "Past, Present, and Future of the Collections of Cinema and Media Music Database." Journal of Film Music 10, no. 2 (December 16, 2022): 142–52. http://dx.doi.org/10.1558/jfm.20817.

Abstract:
For many scholars, critical analysis of film and media music is stymied by a lack of published or manuscript materials, and discoverability of such materials is often hampered by how archival materials are cataloged. While numerous composers have deposited their papers at libraries and archives across the globe, discovery of these collections has occurred via citations in books and articles, or word-of-mouth between scholars. This is because archival collections are cataloged with a focus on the creator of the collection and not the individual object (such as a book or manuscript score). Therefore, if the score or other materials for a film are in the collection of a studio, or of someone other than the composer, they might be hard, if not impossible, to find without a lot of searching or a stroke of luck. Adding to the difficulty is that some of these collections are not fully indexed or searchable, so many materials remain hidden under a century of backlogged archival processing. In order to address this problem, the Collections of Cinema and Media Music (C2M2) has been designed, built, and populated by a small team spread across the United States. This paper will discuss the design and implementation of C2M2, with a focus on the task of creating a custom metadata schema that addresses the unique issues of film and media music, and the myriad ways a researcher might try to access a particular score. It will also show the reader how the metadata functions and displays within the database, with discussions of the challenges inherent in a project of this scope. As research into film and media music continues to expand, scholars are clamoring for access to materials to expand their research beyond the realm of music–film relationships and into areas reliant on archival materials. In such a world, tools such as C2M2 will be critical in creating the ease of access that will eliminate the barriers that hamper such work.
34

Li, Jiansheng, Xiaozhen Zhang, Hao Zheng, Qingqiu Lu, and Gang Fan. "Global Processing Styles Facilitate the Discovery of Structural Similarity." Psychological Reports 122, no. 5 (July 30, 2018): 1755–65. http://dx.doi.org/10.1177/0033294118787499.

Full text
Abstract:
This study examined whether a global processing style facilitates the discovery of structural similarity. In two experiments, participants were presented with three stories after being primed with global or local processing through a Navon task. The first story was the base story, and the other two stories shared either surface similarity or structural similarity with it. The results showed that, compared with the local processing and control groups, a substantially greater number of participants in the global processing group selected the story with structural similarity to the base story. This finding indicates that a global processing style can facilitate the discovery of structural similarity.
APA, Harvard, Vancouver, ISO, and other styles
35

Lacroix, Zoé, Louiqa Raschid, and Barbara A. Eckman. "Techniques for Optimization of Queries on Integrated Biological Resources." Journal of Bioinformatics and Computational Biology 02, no. 02 (June 2004): 375–411. http://dx.doi.org/10.1142/s0219720004000648.

Full text
Abstract:
Today, scientific data are inevitably digitized, stored in a wide variety of formats, and accessible over the Internet. Scientific discovery increasingly involves accessing multiple heterogeneous data sources, integrating the results of complex queries, and applying further analysis and visualization applications in order to collect datasets of interest. Building a scientific integration platform to support these critical tasks requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, and data that are locally materialized in warehouses or generated by software. The inefficiency of existing approaches can significantly affect the process, with lengthy delays in accessing critical resources or outright failure of the system to report any results. Some queries take so long to answer that their results are returned via email, making their integration with other results a tedious task. This paper presents several issues that need to be addressed to provide seamless and efficient integration of biomolecular data. Identified challenges include: capturing and representing the various domain-specific computational capabilities supported by a source, including sequence or text search engines and traditional query processing; developing a methodology to acquire and represent semantic knowledge and metadata about source contents, overlap in source contents, and access costs; and developing cost- and semantics-based decision support tools to select sources and capabilities and to generate efficient query evaluation plans.
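As a toy illustration of the cost- and semantics-based source selection the abstract calls for, here is a minimal sketch assuming hypothetical sources with made-up coverage and cost estimates and a naive independence assumption between sources; it is not the authors' optimizer.

```python
# Minimal sketch of cost-based source selection: given overlapping sources with
# estimated coverage and access cost, greedily pick sources until a target
# coverage is reached. Source names and numbers are hypothetical.

SOURCES = [
    {"name": "flat_file_mirror", "coverage": 0.60, "cost": 1.0},
    {"name": "web_search_engine", "coverage": 0.80, "cost": 5.0},
    {"name": "local_warehouse", "coverage": 0.50, "cost": 0.5},
]

def select_sources(sources, target_coverage=0.95):
    """Greedy plan: order sources by coverage gained per unit cost, treating
    their coverage as independent (a simplifying assumption)."""
    plan, covered = [], 0.0
    for s in sorted(sources, key=lambda s: s["coverage"] / s["cost"], reverse=True):
        if covered >= target_coverage:
            break
        covered = covered + (1 - covered) * s["coverage"]  # independence assumption
        plan.append(s["name"])
    return plan, covered

print(select_sources(SOURCES))
```

A real optimizer would also model source overlap and per-capability costs, which is precisely the metadata the paper argues must be acquired and represented.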
APA, Harvard, Vancouver, ISO, and other styles
36

Asradi, Asradi, and Freddy Sarman. "Effectiveness of Discovery Learning Approaches integrated with Task Group Guidance to Increase Student Confidence in Guidance and Counseling." Bulletin of Social Studies and Community Development 1, no. 1 (2022): 01–09. http://dx.doi.org/10.61436/bsscd/v1i1.pp01-09.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Connor, Ryan, Rodney Brister, Jan Buchmann, Ward Deboutte, Rob Edwards, Joan Martí-Carreras, Mike Tisza, et al. "NCBI’s Virus Discovery Hackathon: Engaging Research Communities to Identify Cloud Infrastructure Requirements." Genes 10, no. 9 (September 16, 2019): 714. http://dx.doi.org/10.3390/genes10090714.

Full text
Abstract:
A wealth of viral data sits untapped in publicly available metagenomic data sets; it might be extracted to create a usable index for the virological research community. We hypothesized that work of this complexity and scale could be done in a hackathon setting. Ten teams comprising over 40 participants from six countries assembled to create a crowd-sourced set of analysis and processing pipelines for a complex biological data set in a three-day event on the San Diego State University campus starting 9 January 2019. Prior to the hackathon, 141,676 metagenomic data sets from the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) were pre-assembled into contiguous assemblies (contigs) by NCBI staff. During the hackathon, a subset of 2953 SRA data sets (approximately 55 million contigs) was selected and further filtered for a minimal length of 1 kb. This resulted in 4.2 million (Mio) contigs, which were aligned using BLAST against all known virus genomes, phylogenetically clustered, and assigned metadata. Of the 4.2 Mio contigs, 360,000 were labeled with domains, and an additional subset of 4400 contigs was screened for virus or virus-like genes. The work yielded valuable insights into both SRA data and the cloud infrastructure required to support such efforts, revealing analysis bottlenecks and possible workarounds. Chiefly: (i) conservative assembly of SRA data improves initial analysis steps; (ii) existing bioinformatic software with weak multithreading/multicore support can be elevated by wrapper scripts that use all cores within a computing node; (iii) redesigning existing bioinformatic algorithms for a cloud infrastructure facilitates their use by a wider audience; and (iv) a cloud infrastructure allows a diverse group of researchers to collaborate effectively. The scientific findings will be extended during a follow-up event. Here, we present the applied workflows, initial results, and lessons learned from the hackathon.
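The 1 kb contig filter mentioned above is easy to picture; a minimal pure-Python FASTA length filter might look like the sketch below (file names are placeholders).

```python
# Minimal FASTA length filter, mirroring the hackathon's >= 1 kb contig cut-off.
# Pure standard library; input and output paths are placeholders.

def filter_fasta(in_path, out_path, min_len=1000):
    def records(handle):
        """Yield (header, sequence) pairs from a FASTA handle."""
        header, seq = None, []
        for line in handle:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line, []
            else:
                seq.append(line)
        if header is not None:
            yield header, "".join(seq)

    kept = 0
    with open(in_path) as fin, open(out_path, "w") as fout:
        for header, seq in records(fin):
            if len(seq) >= min_len:      # drop contigs shorter than min_len
                fout.write(f"{header}\n{seq}\n")
                kept += 1
    return kept

# Example usage: filter_fasta("contigs.fa", "contigs_1kb.fa")
```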
APA, Harvard, Vancouver, ISO, and other styles
38

Lieu, Ryan, and Alberto Campagnolo. "Modelling Linked Data for Conservation." KULA: Knowledge Creation, Dissemination, and Preservation Studies 6, no. 3 (July 27, 2022): 1–8. http://dx.doi.org/10.18357/kula.232.

Full text
Abstract:
Conservation documentation serves an invaluable role in the history of cultural property, and conservators are bound by professional ethics to maintain accurate, clear, and permanent documentation about their work. Though many well-documented schemata exist for describing the holdings of memory organizations, none are designed to capture conservation documentation data in a semantically meaningful way. Conservation data often includes deeply detailed observations about the physical structure, materiality, and condition state of an object and how these characteristics change over time. When included with descriptive catalog metadata, these conservation data points typically manifest in seldom-used fields as free-text notes written with inconsistently applied standards and uncontrolled vocabularies. Beyond the traditional scope of descriptive metadata, conservation treatment documentation includes event-oriented data that captures a sequence of steps taken by the conservator, the addition and removal of material, and cause-and-effect relationships between observed conditions and treatment decisions made by a conservator. In 2020, the Linked Conservation Data Consortium conducted a pilot project to transform unstructured conservation data into linked data. Participants examined potential models in the library field and ultimately chose to conform to the Comité International pour la Documentation (CIDOC) Conceptual Reference Model (CRM) for its accommodation of event-oriented data and detailed descriptive attribution. Project technologists worked with real report data from four institutions to create XML data models and map newly structured data to the CRM. The pilot group then imported CRM-modelled datasets into a discovery environment, developed queries to reconcile the divergent datasets, and created knowledge maps and charts in response to a small set of predetermined research questions. Feedback from conservators attending workshop activities revealed a shared need for conservation data standards and guidelines for those developing documentation templates and databases. Project outcomes signalled the necessity of further developing conservation vocabularies and ontologies to link datasets between institutions and from adjacent domains.
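To suggest what "event-oriented" CRM data can look like in practice, here is a small hedged sketch using rdflib; the URIs and the particular CRM classes and properties chosen (E11 Modification, P14 carried out by, P31 has modified, P3 has note) are one illustrative reading of the model, not the consortium's actual data models.

```python
# Hedged sketch of event-oriented conservation data as RDF triples, in the
# spirit of the CIDOC CRM. Requires rdflib; the example URIs and the choice of
# CRM terms are illustrative assumptions, not the pilot project's mappings.

from rdflib import Graph, Literal, Namespace, RDF

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/conservation/")

g = Graph()
g.bind("crm", CRM)

treatment = EX["treatment/42"]   # one conservation treatment, modelled as an event
book = EX["object/codex-7"]      # the treated object
conservator = EX["agent/jdoe"]   # the actor who performed the treatment

g.add((treatment, RDF.type, CRM["E11_Modification"]))       # treatment as an event
g.add((treatment, CRM["P31_has_modified"], book))           # which object it changed
g.add((treatment, CRM["P14_carried_out_by"], conservator))  # who carried it out
g.add((treatment, CRM["P3_has_note"], Literal("Resewn on three linen cords.")))

print(g.serialize(format="turtle"))
```

Modelling the treatment itself as a node is what lets queries reconcile steps, materials, and cause-and-effect relationships across institutions, rather than leaving them buried in free-text notes.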
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Siqi, and Ping Gao. "Lexical Alignment Effect of the Continuation Task on Interpreting Trainees’ English-to-Chinese Sight Translation Fluency." International Journal of Linguistics, Literature and Translation 6, no. 10 (October 20, 2023): 131–40. http://dx.doi.org/10.32996/ijllt.2023.6.10.16.

Full text
Abstract:
The present study explored the alignment effect of the sight translation continuation task (STCT) and its possible influence on sight translation (ST) fluency. Thirty-four third-year English Education majors at a Chinese university were divided into two groups in the study. The experimental group who conducted the STCT read the English source text of a speech and its translation in Chinese, while the control group only read the English source text. Afterwards, both groups sight translated the continued source text into Chinese. The results indicated that (a) the experimental group aligned with the pre-reading text at the lexical level and (b) the continuation task improved ST fluency to some extent as the experimental group produced significantly fewer self-repairs in their ST products. The study concludes by suggesting that the continuation task can be useful in ST instruction and, hence, should be more visible in the interpreting classroom.
APA, Harvard, Vancouver, ISO, and other styles
40

Tuomchomtam, Sarach, and Nuanwan Soonthornphisaj. "Demographics and Personality Discovery on Social Media: A Machine Learning Approach." Information 12, no. 9 (August 30, 2021): 353. http://dx.doi.org/10.3390/info12090353.

Full text
Abstract:
This research proposes a new feature extraction algorithm that uses aggregated user engagements on social media for demographics and personality discovery tasks. Our proposed framework can discover seven essential attributes: gender identity, age group, residential area, education level, political affiliation, religious belief, and personality type. Multiple feature sets are developed, including comment text, community activity, and hybrid features. Various machine learning algorithms are explored, such as support vector machines, random forest, multi-layer perceptron, and naïve Bayes. An empirical analysis is performed on various aspects, including correctness, robustness, training time, and the class imbalance problem. We obtained the highest prediction performance by using our proposed feature extraction algorithm: 87.18% for personality type prediction. For the demographic attribute prediction task, our feature sets also outperformed the baseline, at 98.1% for residential area, 94.7% for education level, 92.1% for gender identity, 91.5% for political affiliation, 60.6% for religious belief, and 52.0% for age group. Moreover, this paper provides guidelines for the choice of classifiers with appropriate feature sets.
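A minimal sketch of the hybrid-feature idea, assuming a fabricated toy dataset and standard scikit-learn components rather than the authors' exact feature extraction algorithm:

```python
# Hedged sketch of a hybrid feature set: TF-IDF over a user's comment text plus
# aggregated community-activity counts, fed to a random forest. The data and
# field names are fabricated placeholders, not the authors' dataset or method.

from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

users = [
    {"comments": "great game last night", "groups_joined": 12, "posts": 88, "label": "extravert"},
    {"comments": "reading a quiet novel again", "groups_joined": 2, "posts": 5, "label": "introvert"},
    {"comments": "party photos from the weekend", "groups_joined": 9, "posts": 60, "label": "extravert"},
    {"comments": "solo hiking and journaling", "groups_joined": 1, "posts": 7, "label": "introvert"},
]

text = TfidfVectorizer().fit_transform(u["comments"] for u in users)
activity = csr_matrix([[u["groups_joined"], u["posts"]] for u in users], dtype=float)
X = hstack([text, activity]).tocsr()   # hybrid: text + engagement features
y = [u["label"] for u in users]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:1]))
```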
APA, Harvard, Vancouver, ISO, and other styles
41

Rasmussen, Karsten Boye. "Metadata is key - the most important data after data." IASSIST Quarterly 42, no. 2 (July 18, 2018): 1. http://dx.doi.org/10.29173/iq922.

Full text
Abstract:
Welcome to the second issue of volume 42 of the IASSIST Quarterly (IQ 42:2, 2018). The IASSIST Quarterly has had several papers on many different aspects of the Data Documentation Initiative - for a long time better known by its acronym DDI, without any further explanation. DDI is a brand. The IASSIST Quarterly has also included special issues of collections of papers concerning DDI.

Among staff at data archives and data libraries, as well as the users of these facilities, I think we can agree that it is the data that comes first. However, fundamental to all uses of data is the documentation describing the data, without which the data are useless. Therefore, it comes as no surprise that the IASSIST Quarterly is devoted partly to the presentation of papers related to documentation. The question of documentation or data resembles the question of the chicken or the egg. Don't mistake the keys for your car. The metadata and the data belong together and should not be separated. DDI is now a standard, but as with other standards it continues to evolve. The old joke about why standards are good comes to mind: 'The nice thing about standards is that you have so many to choose from!'. DDI is the de facto standard for most social science data at data archives and university data libraries.

The first paper demonstrates a way to tackle the heterogeneous character of the usage of the DDI. The approach is able to support collaborative questionnaire development as well as export in several formats, including the metadata as DDI. The second paper shows how an institutionalized and more general metadata standard - in this case the Belgian Encoded Archival Description (EAD) - is supported by a crosswalk developed from DDI to EAD. However, IQ 42:2 is not a DDI special issue, and the third paper presents an open-source research data management platform called Dendro and a laboratory notebook called LabTablet without mentioning DDI. The paper certainly does mention metadata, though - it is the key to all data.

The winner of the paper competition of the IASSIST 2017 conference is presented in this issue. 'Flexible DDI Storage' is authored by Oliver Hopt, Claus-Peter Klas, and Alexander Mühlbauer, all affiliated with GESIS - the Leibniz Institute for the Social Sciences in Germany. The authors argue that the current usage of DDI is heterogeneous and that this results in complex database models for each developed application. The paper shows a new binding of DDI to applications that works independently of most version changes and interpretative differences, thus avoiding continuous reimplementation. The work is based upon their DDI-FlatDB approach, which they showed at the European DDI conferences in 2015 and 2016 and which is also described in the paper. Furthermore, a web-based questionnaire editor and application supports large DDI structures and collaborative questionnaire development, as well as production of structured metadata for survey institutes and data archives. The paper describes the questionnaire workflow from the start to the export of questionnaire, DDI XML, and SPSS. The development is continuing, and it will be published as open source.

The second paper is also focused on DDI, now in relation to a new data archive. 'Elaborating a Crosswalk Between Data Documentation Initiative (DDI) and Encoded Archival Description (EAD) for an Emerging Data Archive Service Provider' is by Benjamin Peuch, who is a researcher at the State Archives of Belgium. It is expected that the future Belgian data archive will be part of the State Archives, and because DDI is the most widespread metadata standard in the social sciences, the State Archives have developed a DDI-to-EAD crosswalk in order to re-use their EAD infrastructure. The paper shows the conceptual differences between DDI and EAD - both XML based - and how these can be reconciled or avoided for the purpose of a data archive for the social sciences. The author also foresees a fruitful collaboration between traditional archivists and social scientists.

The third paper is by a group of scholars connected to the Informatics Engineering Department of the University of Porto and INESC TEC in Portugal. Cristina Ribeiro, João Rocha da Silva, João Aguiar Castro, Ricardo Carvalho Amorim, João Correia Lopes, and Gabriel David are the authors of 'Research Data Management Tools and Workflows: Experimental Work at the University of Porto'. The authors start with the statement that 'Research datasets include all kinds of objects, from web pages to sensor data, and originate in every domain'. The task is to make these data visible, described, preserved, and searchable. The focus is on data preparation, dataset organization, and metadata creation. Some groups were offered the developed open-source research data management platform Dendro and the laboratory notebook LabTablet, while other groups that demanded a domain-specific approach had specially developed models and applications. All development and metadata modelling keep metadata dissemination in sight.

Submissions of papers for the IASSIST Quarterly are always very welcome. We welcome input from IASSIST conferences or other conferences and workshops, from local presentations, or papers especially written for the IQ. When you are preparing such a presentation, give a thought to turning your one-time presentation into a lasting contribution. Doing that after the event also gives you the opportunity to improve your work after feedback. We encourage you to log in or create an author login at https://www.iassistquarterly.com (our Open Journal System application). We permit authors 'deep links' into the IQ as well as deposition of the paper in your local repository. Chairing a conference session with the purpose of aggregating and integrating papers for a special issue of the IQ is also much appreciated, as the information reaches many more people than the limited number of session participants and will be readily available on the IASSIST Quarterly website at https://www.iassistquarterly.com. Authors are very welcome to take a look at the instructions and layout: https://www.iassistquarterly.com/index.php/iassist/about/submissions. Authors can also contact me directly via e-mail: kbr@sam.sdu.dk. Should you be interested in compiling a special issue for the IQ as guest editor(s), I will also be delighted to hear from you.

Karsten Boye Rasmussen - June, 2018
APA, Harvard, Vancouver, ISO, and other styles
42

Beck, Wolfgang, Tracy Rose, Matthew Milowsky, William Kim, Jeff Klomp, and Benjamin Vincent. "662 Statistical learning from clinical and immunogenomic variables to predict response and survival with PD-L1 inhibition in advanced urothelial cancer." Journal for ImmunoTherapy of Cancer 8, Suppl 3 (November 2020): A699. http://dx.doi.org/10.1136/jitc-2020-sitc2020.0662.

Full text
Abstract:
Background: Urothelial cancer patients treated with immune checkpoint inhibitor (ICI) therapy have varied response and survival.1 Clinical and immunogenomic biomarkers could help predict ICI response and survival to inform decisions about patient selection for ICI treatment.
Methods: The association of clinical metadata and immunogenomic signatures with response and survival was analyzed in a set of 347 urothelial cancer patients treated with the PD-L1 inhibitor atezolizumab as part of the IMVigor210 study.1 Data were divided into a discovery set (2/3 of patients) and validation set (1/3 of patients). We analyzed as potential predictors 70 total variables, of which 16 were clinical metadata and 54 were immunogenomic signatures. Categorical variables were converted to dummy variables (89 total variables: 35 clinical, 54 immunogenomic). Using the discovery set, elastic net regression with Monte Carlo cross-validation was used to build optimal models for response (logistic regression) and survival (Cox proportional-hazards). Model performance was evaluated using the validation set.
Results: In the optimal model of response, 17 variables (10 clinical, 7 immunogenomic) were selected as informative predictors, including Baseline Eastern Cooperative Oncology Group (ECOG) Score = 0, Neoantigen Burden, Lymph Node Metastases, and Tumor Mutation Burden (Figure 1). The final model predicted patient response with good performance (Area Under Curve = 0.828, p(AUC) = 2.38e-3; True Negative Rate = 91.7%, True Positive Rate = 87.5%, p(confusion matrix) = 0.0252). In the optimal model of survival, 32 variables (17 clinical, 15 immunogenomic) were selected as informative predictors, including Baseline ECOG Score = 0, IC Level 2+, Race = Asian, and Consensus Tumor Subtype = Neuroendocrine (Figure 2). The final model predicted patient survival with good performance (c-index(model) = 0.652, p(c-index) = 0.0290).
[Figure 1: Elastic net logistic regression with Monte Carlo cross-validation to predict response to atezolizumab in urothelial cancer. (A) Predictive variables with beta coefficient 95% confidence intervals that exclude 0, derived from Monte Carlo cross-validation. (B) Confusion matrix of actual vs. predicted response data in the validation set. (C) Total response proportions of actual and predicted response data in the validation set.]
[Figure 2: Elastic net Cox proportional-hazards regression with Monte Carlo cross-validation to predict survival. (A) Predictor variables with beta coefficient 95% confidence intervals that exclude 0, derived from Monte Carlo cross-validation. (B) Predictions vs. survival outcomes in the validation set. (C) Loess models of density curves for survival outcomes in the validation set; 95% confidence intervals were generated through bootstrapping with replacement. (D) Loess fit of predictions vs. survival outcomes in the validation set; the 95% confidence interval indicates strength of fit.]
Conclusions: Models incorporating clinical metadata and immunogenomic signatures can predict response and survival for urothelial cancer patients treated with atezolizumab. Among predictors in those models, baseline performance status is the greatest and most positive predictor of response and survival.
Reference: Mariathasan S, Turley S, Nickles D, et al. TGFβ attenuates tumour response to PD-L1 blockade by contributing to exclusion of T cells. Nature 2018;554:544–548.
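For readers unfamiliar with the method, here is a minimal sketch of elastic-net logistic regression evaluated by Monte Carlo cross-validation (repeated random train/test splits), run on synthetic data standing in for the clinical and immunogenomic predictors; it is not the study's code.

```python
# Hedged sketch: elastic-net logistic regression scored over repeated random
# splits (Monte Carlo cross-validation). Synthetic data; 89 candidate
# predictors and the 1/3 test fraction echo the abstract's setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(347, 89))                         # 347 patients, 89 variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=347) > 0).astype(int)

model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)

aucs = []
for train, test in ShuffleSplit(n_splits=20, test_size=1/3, random_state=0).split(X):
    model.fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))

print(f"mean AUC over Monte Carlo splits: {np.mean(aucs):.3f}")
```

The elastic-net penalty (mixing L1 and L2) is what shrinks most of the 89 coefficients to zero, leaving the small sets of informative predictors the abstract reports.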
APA, Harvard, Vancouver, ISO, and other styles
43

Coulter, D. A., D. O. Jones, P. McGill, R. J. Foley, P. D. Aleo, M. J. Bustamante-Rosell, D. Chatterjee, et al. "YSE-PZ: A Transient Survey Management Platform that Empowers the Human-in-the-loop." Publications of the Astronomical Society of the Pacific 135, no. 1048 (June 1, 2023): 064501. http://dx.doi.org/10.1088/1538-3873/acd662.

Full text
Abstract:
The modern study of astrophysical transients has been transformed by an exponentially growing volume of data. Within the last decade, the transient discovery rate has increased by a factor of ∼20, with associated survey data, archival data, and metadata also increasing with the number of discoveries. To manage the data at this increased rate, we require new tools. Here we present YSE-PZ, a transient survey management platform that ingests multiple live streams of transient discovery alerts, identifies the host galaxies of those transients, downloads coincident archival data, and retrieves photometry and spectra from ongoing surveys. YSE-PZ also presents a user with a range of tools to make and support timely and informed transient follow-up decisions. Those subsequent observations enhance transient science and can reveal physics only accessible with rapid follow-up observations. Rather than automating out human interaction, YSE-PZ focuses on accelerating and enhancing human decision making, a role we describe as empowering the human-in-the-loop. Finally, YSE-PZ is built to be flexibly used and deployed; YSE-PZ can support multiple, simultaneous, and independent transient collaborations through group-level data permissions, allowing a user to view the data associated with the union of all groups in which they are a member. YSE-PZ can be used as a local instance installed via Docker or deployed as a service hosted in the cloud. We provide YSE-PZ as an open-source tool for the community.
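The union-of-groups visibility rule described above reduces to a one-line set intersection; a hedged sketch follows (all names are illustrative, not YSE-PZ's actual data model).

```python
# Minimal sketch of union-of-groups permissions: a user may view a transient
# if they share at least one group with it. Names are illustrative placeholders.

def visible_transients(user_groups, transients):
    """transients: iterable of (name, groups) pairs; returns viewable names."""
    user_groups = set(user_groups)
    return [name for name, groups in transients if user_groups & set(groups)]

transients = [("SN 2023abc", {"survey-a", "survey-b"}), ("AT 2023xyz", {"private"})]
print(visible_transients({"survey-a"}, transients))   # -> ['SN 2023abc']
```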
APA, Harvard, Vancouver, ISO, and other styles
44

Pan, Yingying, and Hoisoo Kim. "Effects of Scaffolding Type and Metacognition Level on Higher-order Thinking Skills and Task Performance in Mathematical Learning: Lens of Neo-Vygotskian Theoretical Learning." SNU Journal of Education Research 31, no. 4 (December 31, 2022): 1–21. http://dx.doi.org/10.54346/sjer.2022.31.4.1.

Full text
Abstract:
Neo-Vygotskian theoretical learning was developed based on Vygotsky's notions of scientific knowledge as psychological tools and of teaching these tools as major instructional content. Vygotsky's adherents challenged the validity of the theoretical assumptions of guided discovery learning and argued that this scaffolding strategy was probably not a proper way to teach students scientific knowledge. Therefore, this paper investigated the effects of these different scaffolds and of learners' metacognition on their higher-order thinking skills and task performance in mathematical learning. Eighty-four eighth-grade students participated in our study. The learning sessions, consisting of eight 40-minute lessons, were conducted by a single teacher. The results showed that the theoretical learning group developed significantly better higher-order thinking skills and performed better than did the guided discovery learning group or the didactic learning group. Moreover, our study did not find a significant interaction between scaffolding type and metacognition, which indicated that the theoretical learning scaffold was more effective than the other scaffolds regardless of the level of metacognition.
APA, Harvard, Vancouver, ISO, and other styles
45

Vela, Vjosa. "Exploring the Impact of Task-Based Activities on Vocabulary Acquisition and Student Attitudes Towards Reading Short Stories: A Comparison of Two Approaches." SEEU Review 18, no. 1 (June 1, 2023): 19–36. http://dx.doi.org/10.2478/seeur-2023-0008.

Full text
Abstract:
This study investigates the effectiveness of integrating short stories with task-based learning activities in English as a foreign language (EFL) classes to promote vocabulary development and motivation among L2 learners. Six short stories were selected by the participants based on their interests, pre- and post-tests were conducted to evaluate vocabulary acquisition, and a questionnaire was used to gather information about students' perceptions of task-based activities after reading short stories. The study involved 60 intermediate-level English students at the SEEU Language Center, assigned to either the control or experimental group. The experimental group completed post-reading tasks such as keeping vocabulary notebooks, reading circle discussions, sequencing activities, plot structure understanding, and group poster presentations after reading each story. The findings suggest that incorporating engaging reading activities had a favorable effect on language learning. The experimental group exhibited greater vocabulary acquisition and comprehension than the control group. The follow-up tasks created a sense of achievement and improved communication and interaction among the students. According to the study, including comprehensible input through extensive reading, along with constructive output from task-oriented exercises, can effectively promote language progress and enhance students' motivation, thereby facilitating language development in L2 learners.
APA, Harvard, Vancouver, ISO, and other styles
46

Sesagiri Raamkumar, Aravind, Schubert Foo, and Natalie Pang. "Can I have more of these please?" Electronic Library 36, no. 3 (June 4, 2018): 568–87. http://dx.doi.org/10.1108/el-04-2017-0077.

Full text
Abstract:
Purpose: During the literature review phase, finding similar research papers can be a difficult proposition for researchers due to the procedural complexity of the task. Current systems and approaches help in finding similar papers for a given paper, even though researchers tend to additionally search using a set of papers. This paper focuses on conceptualizing and developing recommendation techniques for key literature review and manuscript preparatory tasks that are interconnected. It presents the user evaluation results for the task in which seed-basket-based discovery of papers is performed.
Design/methodology/approach: A user evaluation study was conducted on a corpus of papers extracted from the ACM Digital Library. Participants in the study included 121 researchers who had experience in authoring research papers. Participants, split into student and staff groups, had to select one of 43 provided topics and run the tasks offered by the developed assistive system. A questionnaire was provided at the end of each task for evaluating task performance.
Findings: The results show that the student group evaluated the task more favourably than the staff group, even though the difference was statistically significant for only 5 of the 16 measures. The measures of topical relevance, interdisciplinarity, familiarity, and usefulness were found to be significant predictors of user satisfaction in this task. A majority of the participants who explicitly stated a need for assistance in finding similar papers were satisfied with the papers recommended in the study.
Originality/value: The current research helps in bridging the gap between novices and experts in terms of literature review skills. The hybrid recommendation technique evaluated in this study highlights the effectiveness of combining the results of different approaches in finding similar papers.
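One simple way to picture seed-basket-based discovery is to average the TF-IDF vectors of the seed papers and rank the rest of the corpus by cosine similarity; the sketch below does exactly that and is a stand-in for the idea, not the paper's hybrid recommendation technique.

```python
# Hedged sketch of seed-basket discovery: build a basket profile by averaging
# the seed papers' TF-IDF vectors, then rank the remaining corpus by cosine
# similarity to the profile. Corpus text is fabricated for illustration.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "query optimization over integrated biological sources",
    "metadata standards for research data discovery",
    "cost models for distributed query planning",
    "deep learning for image classification",
]
seed_ids = [0, 2]   # the user's basket of known-relevant papers

tfidf = TfidfVectorizer().fit_transform(corpus)
profile = np.asarray(tfidf[seed_ids].mean(axis=0))     # basket centroid
scores = cosine_similarity(profile, tfidf).ravel()

candidates = [i for i in np.argsort(-scores) if i not in seed_ids]
print("recommended first:", corpus[candidates[0]])
```

Searching from a set of papers rather than a single paper is exactly the gap the abstract identifies in existing systems.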
APA, Harvard, Vancouver, ISO, and other styles
47

Pando, Francisco. "Comparison of species information TDWG standards from the point of view of the Plinian Core specification." Biodiversity Information Science and Standards 2 (May 17, 2018): e25869. http://dx.doi.org/10.3897/biss.2.25869.

Full text
Abstract:
Species-level information, as an important component of the biodiversity information landscape, is an area where some TDWG standards and activities coincide. Plinian Core (Plinian Core Task Group 2018) is a generalist specification that covers aspects such as species descriptions and nomenclature, as well as many others (legal, conservation, management, etc.). While the Plinian Core non-biological terms have no counterpart in the TDWG developments, some of its biological ones do, and that is the focus of this work. First, it must be noted that Plinian Core relies on some TDWG standards for specific facets of species information:
Standard: Darwin Core (Darwin Core maintenance group, Biodiversity Information Standards (TDWG) 2014). Elements: taxonConceptID, Hierarchy, MeasurementOrFact, ResourceRelationShip.
Standard: Ecological Metadata Language (EML project members 2011). Elements: associatedParty, keywordSet, coverage, dataset.
Standard: Encyclopedia of Life Schema (EOL Team 2012). Elements: AncillaryData: DataObjectBase.
Standard: Global Invasive Species Network (GISIN 2008). Elements: origin, presence, persistence, distribution, harmful, modified, startValidDate, endValidDate, countryCode, stateProvince, county, localityName, language, citation, abundance...
Standard: Taxon Concept Schema, TCS (Taxonomic Names and Concepts interest group 2006). Elements: scientificName.
Given the direct dependency of Plinian Core on these terms, they do not pose any compatibility or interoperability problem. However, biological descriptions (especially structured ones) are the object of DELTA (Dallwitz 2006) and the Structured Descriptive Data (SDD) standard (Hagedorn et al. 2005), and they are also covered by Plinian Core. This convergence presents overlaps, mismatches, and nuances, whose discussion is the core of this work. Using some species descriptions as a test case and transforming them between these standards (Plinian Core, DELTA, and SDD), the strengths and compatibility issues of these specifications are evaluated and discussed. Some operational aspects of Plinian Core in relation to GBIF's IPT (GBIF Secretariat 2016) and the INSPIRE directive (European Commission 2007) are also reviewed.
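The reliance of Plinian Core on other standards' elements can be pictured as an element-level crosswalk; the sketch below encodes two mappings stated directly in the list above, with an identity fallback, and the representation is illustrative rather than an authoritative alignment.

```python
# Illustrative sketch of an element-level crosswalk between specifications.
# The two mappings shown follow the standards list above; everything else
# about this representation is an assumption for demonstration only.

CROSSWALK = {
    # (source_standard, element) -> (target_standard, element)
    ("PlinianCore", "scientificName"): ("TCS", "scientificName"),
    ("PlinianCore", "taxonConceptID"): ("DarwinCore", "taxonConceptID"),
}

def map_element(standard, element):
    """Resolve an element to its governing standard; identity if unmapped."""
    return CROSSWALK.get((standard, element), (standard, element))

print(map_element("PlinianCore", "scientificName"))   # -> ('TCS', 'scientificName')
```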
APA, Harvard, Vancouver, ISO, and other styles
48

Figueroa, Alejandro, Billy Peralta, and Orietta Nicolis. "Coming to Grips with Age Prediction on Imbalanced Multimodal Community Question Answering Data." Information 12, no. 2 (January 21, 2021): 48. http://dx.doi.org/10.3390/info12020048.

Full text
Abstract:
For almost every online service, it is fundamental to understand the patterns, differences, and trends revealed by age demographic analysis; consider, for example, the discovery of malicious activity, including identity theft, violation of community guidelines, and fake profiles. In the particular case of platforms such as Facebook, Twitter, and Yahoo! Answers, user demographics affect revenues and user experience; demographics assist in ensuring that the needs of each cohort are fulfilled by personalizing and contextualizing content. Despite the fact that technology has become more accessible, and thereby evermore prevalent in both personal and professional lives, older people continue to trail Gen Z and Millennials in its adoption. This lag brings about an under-representation that harms demographic analysis and supervised machine learning models. To that end, this paper examines this and other major challenges facing three distinct modalities (texts, images, and metadata) in community question answering (cQA) platforms. For textual inputs, we propose an age-batched greedy curriculum learning (AGCL) approach to lessen the effects of their inherent class imbalances. When built on top of FastText shallow neural networks, AGCL achieved an increase of ca. 4% in macro-F1-score with respect to baseline systems (i.e., off-the-shelf deep neural networks). With regard to metadata, our experiments show that random forest classifiers significantly improve their performance when individuals close to generational borders are excluded (up to 20% more accuracy); and by experimenting with neural network-based visual classifiers, we discovered that images are the most challenging modality for age prediction. In fact, it is hard for visual inspection to connect profile pictures with age cohorts, and there are considerable differences in their group distributions with respect to metadata and textual inputs. All in all, we envisage that our findings will be highly relevant as guidelines for constructing assorted multimodal supervised models for automatic age recognition across cQA platforms.
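Below is a minimal sketch of one plausible reading of age-batched greedy curriculum learning: introduce cohorts into the training set from best-represented to least, retraining at each stage so under-represented groups arrive gradually. This is an interpretation for illustration, not the authors' exact algorithm.

```python
# Hedged sketch of an age-batched curriculum: order training stages by cohort
# size, greedily growing the training set. An illustrative interpretation of
# the AGCL idea; cohort labels and data are fabricated placeholders.

from collections import defaultdict

def curriculum_batches(examples):
    """examples: list of (text, age_cohort) pairs.
    Yields cumulative training sets, largest cohorts first."""
    by_cohort = defaultdict(list)
    for text, cohort in examples:
        by_cohort[cohort].append((text, cohort))
    training = []
    for cohort in sorted(by_cohort, key=lambda c: len(by_cohort[c]), reverse=True):
        training.extend(by_cohort[cohort])
        yield list(training)   # train (e.g., a FastText model) on each stage

data = [("loves new social apps", "18-24")] * 5 + \
       [("checks the feed daily", "35-44")] * 3 + \
       [("prints emails to read", "65+")] * 1
for stage, batch in enumerate(curriculum_batches(data), 1):
    print(f"stage {stage}: {len(batch)} examples")
```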
APA, Harvard, Vancouver, ISO, and other styles
49

Dyer, Adam, Isabelle Killane, Nollaig Bourke, Conor Woods, James Gibney, Desmond O'Neill, Richard Reilly, and Sean Kennelly. "84 Does Dual-Task Gait Speed Predict Cognitive Performance in Midlife Type 2 Diabetes? Baseline Results from the ENBIND Study." Age and Ageing 48, Supplement_3 (September 2019): iii1—iii16. http://dx.doi.org/10.1093/ageing/afz102.19.

Full text
Abstract:
Background: Type 2 diabetes (T2DM) in midlife is associated with a greater risk of dementia in later life. The longitudinal ENBIND Study is examining novel approaches to biomarker discovery in this high-risk group which may help identify those at greatest risk.
Methods: Non-demented participants with midlife T2DM (no micro/macrovascular complications) and matched controls were recruited. Following detailed health/diabetes assessment, general cognitive (MoCA) and computerised neuropsychological (CANTAB) assessments were performed. Gait was assessed by stopwatch and accelerometers across several tasks, including self-selected and maximal gait speed, in addition to a dual-task cognitive paradigm (reciting alternate letters of the alphabet). Bloods were analysed for C-reactive protein (CRP) and glycated haemoglobin (HbA1c). Between-group differences were analysed using t-tests/non-parametric equivalents, and linear regression was used for multivariate analysis.
Results: Sixty participants with T2DM (51.9 +/- 8.4 yrs) and 30 matched controls (52.3 +/- 7.9 yrs) were recruited. Controlling for demographic and cardiovascular covariates, T2DM was associated with a lower MoCA score and slower self-selected, maximal, and dual-task gait speed (all p < 0.05). Maximal gait speed (p = 0.006) but not self-selected gait speed (p = 0.47) was associated with poorer cognitive function. On multivariate analysis of the dual-task difference, both T2DM and a lower MoCA score were associated with poorer performance (p < 0.001, p = 0.003). Overall, performance in the lowest vs highest quartile on the dual-task gait paradigm was associated with significantly poorer performance on the MoCA (p < 0.001; median 27 vs 29). On multivariate analysis of laboratory parameters, higher CRP levels were associated with slower maximal (p = 0.041) and dual-task (p = 0.033) gait performance.
Conclusion: Midlife T2DM is associated with poorer cognitive performance. Gait speed, and in particular dual-task gait speed, correlates strongly with general cognitive performance. Future work will tease out the specific domains of gait and cognition that are affected and assess them longitudinally in this high-risk group.
APA, Harvard, Vancouver, ISO, and other styles
50

Wong, Sandra. "Database Discovery: From a Migration Project to a Content Strategy." Library Resources & Technical Services 64, no. 2 (May 8, 2020): 72. http://dx.doi.org/10.5860/lrts.64n2.72.

Full text
Abstract:
After migrating to Ex Libris's Alma and Primo for its integrated library system (ILS) and discovery layer, library staff at Simon Fraser University (SFU) maintained duplicate database information for fifteen months in a locally developed electronic resources management (ERM) system known as the CUFTS ERM. The CUFTS ERM provided the data for the library's public-facing database list, known as the CUFTS resource database (CRDB). A database search function had been on Ex Libris's Primo roadmap for product development and was announced six months after the library went live with Alma and Primo. However, the new Primo database search function could not replace the CRDB, and members of the library's ILS Steering Committee who managed Alma and Primo were concerned about significant negative impacts on end users if the library adopted Primo to replace it. The steering committee formed a task group to investigate options for creating a database list from Alma records in order to reduce duplication of staff time, effort, and systems resources, and to replicate the main functions of the existing CRDB for end-user discovery and access.
APA, Harvard, Vancouver, ISO, and other styles