Journal articles on the topic 'Transactional Records Access Clearinghouse'

Consult the top 24 journal articles for your research on the topic 'Transactional Records Access Clearinghouse.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Kirksey, J. Jacob, Carolyn Sattin-Bajaj, Michael A. Gottfried, Jennifer Freeman, and Christopher S. Ozuna. "Deportations Near the Schoolyard: Examining Immigration Enforcement and Racial/Ethnic Gaps in Educational Outcomes." AERA Open 6, no. 1 (January 2020): 233285841989907. http://dx.doi.org/10.1177/2332858419899074.

Abstract:
With increased tensions and political rhetoric surrounding immigration enforcement in the United States, schools are facing greater challenges in ensuring support for their students of immigrant and Latino/a origin. This study examined the associations between deportations near school districts and racial/ethnic gaps in educational outcomes in school districts across the country. With data from the Stanford Educational Data Archive, the Civil Rights Data Collection, and the Transactional Records Access Clearinghouse, this study used longitudinal, cross-sectional analyses and found that in the years when districts had more deportations occurring within 25 miles, White-Latino/a gaps were larger in math achievement and rates of chronic absenteeism. No associations were found for gaps in English language arts achievement or rates of bullying. Implications for researchers, policymakers, and school leaders are discussed.
2

Skatova, Anya, Kate Shiells, and Andy Boyd. "Attitudes towards transactional data donation and linkage in a longitudinal population study: evidence from the Avon Longitudinal Study of Parents and Children." Wellcome Open Research 4 (June 9, 2021): 192. http://dx.doi.org/10.12688/wellcomeopenres.15557.2.

Abstract:
Background: Commercial transaction records, such as data collected through banking and retail loyalty cards, present a novel opportunity for longitudinal population studies to capture data on participants’ real-world behaviours and interactions. However, little is known about participant attitudes towards donating transactional records for this purpose. This study aimed to: (i) explore the attitudes of longitudinal population study participants towards sharing their transactional records for health research and data linkage; and (ii) explore the safeguards that researchers should consider implementing when looking to request transactional data from participants for data linkage studies. Methods: Participants in the Avon Longitudinal Study of Parents and Children were invited to a series of three focus groups with semi-structured discussions designed to elicit opinions. By asking participants to attend three focus groups we aimed to facilitate more in-depth discussions around the potentially complex topic of data donation and linkage. Thematic analysis was used to sort data into overarching themes addressing the research questions. Results: Participants (n = 20) expressed a variety of attitudes towards data linkage, which were associated with safeguards to address concerns. These data were sorted into three themes: understanding, trust, and control. We discuss the importance of explaining the purpose of data linkage, consent options, who the data is linked with, and the sensitivities associated with different parts of transactional data. We describe options for providing further information and the controls that participants consider should be available when studies request access to transactional records. Conclusions: This study provides initial evidence on the attitudes and concerns of participants of a longitudinal cohort study towards transactional record linkage.
The findings suggest a number of safeguards which researchers should consider when looking to recruit participants for similar studies, such as the importance of ensuring participants have access to appropriate information, control over their data, and trust in the organisation.
3

Skatova, Anya, Kate Shiells, and Andy Boyd. "Attitudes towards transactional data donation and linkage in a longitudinal population study: evidence from the Avon Longitudinal Study of Parents and Children." Wellcome Open Research 4 (December 3, 2019): 192. http://dx.doi.org/10.12688/wellcomeopenres.15557.1.

Abstract:
Background: Commercial transaction records, such as data collected through banking and retail loyalty cards, present a novel opportunity for longitudinal population studies to capture data on participants’ real-world behaviours and interactions. However, little is known about participant attitudes towards donating transactional records for this purpose. This study aimed to: (i) explore the attitudes of longitudinal population study participants towards sharing their transactional records for health research and data linkage; and (ii) explore the safeguards that researchers should consider implementing when looking to request transactional data from participants for data linkage studies. Methods: Participants in the Avon Longitudinal Study of Parents and Children were invited to a series of three focus groups with semi-structured discussions designed to elicit opinions. By asking participants to attend three focus groups we aimed to facilitate more in-depth discussions around the potentially complex topic of data donation and linkage. Thematic analysis was used to sort data into overarching themes addressing the research questions. Results: Participants (n = 20) expressed a variety of attitudes towards data linkage, which were associated with safeguards to address concerns. These data were sorted into three themes: information, trust, and control. We discuss the importance of explaining the purpose of data linkage, consent options, who the data is linked with, and the sensitivities associated with different parts of transactional data. We describe options for providing further information and the controls that participants consider should be available when studies request access to transactional records. Conclusions: This study provides initial evidence on the attitudes and concerns of participants of a longitudinal cohort study towards transactional record linkage.
The findings suggest a number of safeguards which researchers should consider when looking to recruit participants for similar studies, such as the importance of ensuring participants have access to appropriate information, control over their data, and trust in the organisation.
4

Sonkamble, Rahul Ganpatrao, Anupkumar M. Bongale, Shraddha Phansalkar, Abhishek Sharma, and Shailendra Rajput. "Secure Data Transmission of Electronic Health Records Using Blockchain Technology." Electronics 12, no. 4 (February 17, 2023): 1015. http://dx.doi.org/10.3390/electronics12041015.

Abstract:
Electronic Health Records (EHR) serve as a solid documentation of health transactions and as a vital resource of information for healthcare stakeholders. EHR integrity and security issues, however, continue to be intractable. Blockchain-based EHR architectures, by contrast, address the issues of integrity very effectively. In this work, we propose a decentralized patient-centered healthcare data management (PCHDM) scheme with a blockchain-based EHR framework to address issues of confidentiality, access control, and privacy of records. This patient-centric architecture keeps the patient at the center of control for the secured storage of EHR data, and it is effective in a storage environment combining the interplanetary file system (IPFS) with blockchain technology. To control unauthorized users, the proposed secure password authentication-based key exchange (SPAKE) implements smart contract-based access control over EHR transactions and access policies. The experimental setup comprises four Hyperledger Fabric nodes with a LevelDB database and IPFS off-chain storage. The framework was evaluated using the public hepatitis dataset, with parameters such as block creation time, transactional computational overhead with encryption key size, and uploading/downloading time with EHR size. The framework enables patient-centric access control of the EHR with the SPAKE encryption algorithm.
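The abstract above rests on hash-linked, tamper-evident storage of EHR transactions. As a rough illustration of that core idea only (not the authors' SPAKE/IPFS/Hyperledger Fabric implementation), a minimal hash-chained ledger can be sketched in Python; all record names and contents here are hypothetical:

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash over the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_record(chain, record):
    # Each block stores the hash of its predecessor, so later
    # tampering with an earlier EHR entry breaks the chain.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "record": record}
    block["hash"] = block_hash({"prev": prev, "record": record})
    chain.append(block)
    return chain

def verify(chain):
    # Recompute every hash and check each back-link.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash({"prev": block["prev"], "record": block["record"]}):
            return False
        prev = block["hash"]
    return True

chain = []
append_record(chain, {"patient": "p1", "note": "hepatitis panel"})
append_record(chain, {"patient": "p1", "note": "follow-up"})
assert verify(chain)
chain[0]["record"]["note"] = "altered"  # tampering is detected
assert not verify(chain)
```

In the paper's setting the chain is maintained by distributed ledger nodes and the payloads live off-chain in IPFS; this sketch only shows why a hash chain makes integrity violations detectable.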
5

Mandal, Ajaya, Prakriti Dumaru, Sagar Bhandari, Shreeti Shrestha, and Subarna Shakya. "Decentralized Electronic Health Record System." Journal of the Institute of Engineering 15, no. 1 (February 16, 2020): 77–80. http://dx.doi.org/10.3126/jie.v15i1.27716.

Abstract:
The Decentralized Electronic Health Record System was developed to overcome the shortcomings of the traditional Electronic Health Record (EHR) system: to assure interoperability by providing open access to sensitive health data while still preserving personal data privacy and anonymity and avoiding data misuse. These issues with traditional EHR systems can be addressed by applying emerging technologies, namely blockchain together with the Inter Planetary File System (IPFS), which enable data sharing in a decentralized, transactional fashion, thereby maintaining a delicate balance between the privacy and accessibility of electronic health records. A blockchain-based EHR system has been built for secure, efficient and interoperable access to medical records by both patients and doctors while preserving the privacy of sensitive patient information. Patients can easily and comprehensively access their medical records across providers and treatment sites using the unique properties of blockchain and decentralized storage. Separate portals for patients and doctors have been built, with smart contracts handling further interaction between doctors and patients. The system thus demonstrates how the principles of decentralization and blockchain architecture can contribute to an EHR system, using Ethereum smart contracts and IPFS to orchestrate medical record access while providing patients with comprehensive record review along with consideration for auditability and data sharing.
6

Mehrad, Aida, Jordi Fernández-Castro, and Maria Pau González Gómez de Olmedo. "A systematic review of leadership styles, work engagement and organizational support." International Journal of Research in Business and Social Science (2147- 4478) 9, no. 4 (July 3, 2020): 66–77. http://dx.doi.org/10.20525/ijrbs.v9i4.735.

Abstract:
Work engagement is one of the critical factors in an organization, so considering factors such as leadership style and organizational support is important. Lack of attention to these factors can lead to undesirable environments for workers. The purpose of this study is to conduct a systematic review based on these variables. Data for this research were gathered from the Web of Knowledge, PsycARTICLES, Scopus, PsycINFO, Web of Science, and Google Scholar databases. A total of 165 records were identified in the databases and 15 records were discovered in other sources; 149 records remained after deleting duplicates. Of these, 117 records were examined and 52 were excluded. Sixty-five complete articles were chosen for evaluation, and after 10 of them had been excluded, 55 studies ultimately remained for inclusion in the synthesis. Overall, leadership styles (transformational leadership and transactional leadership) and organizational support were found to be two imperative organizational factors for achieving better outcomes in the workplace.
7

Chioma Susan Nwaimo, Ayodeji Enoch Adegbola, and Mayokun Daniel Adegbola. "Predictive analytics for financial inclusion: Using machine learning to improve credit access for under banked populations." Computer Science & IT Research Journal 5, no. 6 (June 7, 2024): 1358–73. http://dx.doi.org/10.51594/csitrj.v5i6.1201.

Abstract:
This paper explores the application of predictive analytics and machine learning techniques to enhance credit assessment and lending practices. By leveraging alternative data sources, such as mobile phone usage, social media activity, and transactional records, machine learning models can provide more accurate credit risk evaluations for individuals with limited traditional financial histories. The study demonstrates the efficacy of these models through empirical analysis, showcasing their potential to reduce default rates while increasing approval rates for credit applicants. Furthermore, the paper discusses the ethical considerations and potential biases associated with the use of non-traditional data in credit scoring. The findings underscore the transformative impact of machine learning in fostering financial inclusion, offering practical insights for policymakers, financial institutions, and technology developers aiming to bridge the credit gap for underbanked communities. The paper thus examines the transformative potential of predictive analytics and machine learning in enhancing financial inclusion by improving credit access for underbanked populations. Traditional credit scoring methods often fail to accurately assess the creditworthiness of individuals lacking conventional financial histories, thereby excluding a significant portion of the population from financial services. By incorporating alternative data sources such as mobile phone usage, social media interactions, utility payments, and transactional records, machine learning models can offer more comprehensive and precise credit risk evaluations. The research methodology involves developing and testing various machine learning algorithms, including decision trees, random forests, and neural networks, to predict creditworthiness. The models are trained and validated on datasets that include both traditional financial data and alternative data sources.
The performance of these models is measured against standard metrics such as accuracy, precision, recall, and the area under the receiver operating characteristic (ROC) curve. Empirical results indicate that models utilizing alternative data significantly outperform traditional credit scoring methods, leading to higher approval rates for credit applicants while maintaining or improving risk management standards. Keywords: financial inclusion, predictive analytics, machine learning, alternative data.
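The abstract above evaluates credit-scoring models against accuracy, precision, and recall. These metrics reduce to simple counts over a confusion matrix; a minimal sketch with hypothetical toy labels (1 = repaid, 0 = defaulted), not the paper's actual data or models:

```python
def confusion_counts(y_true, y_pred):
    # Tally true/false positives and negatives pairwise.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

# Hypothetical labels and model predictions for eight applicants.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(metrics(y_true, y_pred))  # accuracy, precision, recall all 0.75 here
```

ROC AUC additionally requires continuous scores rather than hard labels, which is why credit models are usually compared on score distributions rather than a single threshold.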
8

Padilla, Rubelyn C. "Assessment of Library Users’ Problems on Transactional Procedures: Basis for Library Management System Development." International Journal of Scientific and Management Research 05, no. 06 (2022): 10–17. http://dx.doi.org/10.37502/ijsmr.2022.5602.

Abstract:
This research aimed to assess the problems encountered by library users (librarian, staff, students, and teachers) in the different library transactional procedures, as a basis for the development and design of a Library Management System for Cagayan State University-Piat in the Philippines. The study utilized a descriptive design to determine the degree of seriousness of the problems the respondents encountered in using the existing library transactions, and the interventions that can address them. Findings revealed that the problems encountered by library users in the manual operations of the library, in terms of borrowing, returning and searching library materials, are “Very Serious”. On the part of the library staff, the issues concerning security of records, cataloguing, borrowing, returning, searching, inventory of library materials and generation of reports are considered “Serious”. Using the Waterfall Model, the system was developed with the aid of the required software and hardware. The library system developed “very efficiently” stored the library records in a password-secured database, systematically classified the materials, saved time in entering information on library materials, checked for duplicate accession numbers, monitored borrowed and returned books, systematically displayed the inventory of materials, and saved time in generating accurate library reports. It can be concluded that the proposed library system can provide better and easier access to the different transactions in the library and provide convenience to library staff and library users.
9

Welekar, Rashmi, Farhadeeba Shaikh, Abhijit Chitre, Kirti Wanjale, Shabana Pathan, and Anil Kumar. "An advanced cloud based framework for privacy and security in medical data using cryptographic method." Journal of Discrete Mathematical Sciences and Cryptography 26, no. 5 (2023): 1585–96. http://dx.doi.org/10.47974/jdmsc-1826.

Abstract:
The exchange of medical information has been drastically altered by patient-centered developments such as personal health records (PHR). By giving patients a place to handle their own PHR on a unified transactional platform, personal health record services increase the efficiency with which medical information may be kept, accessed, and transferred. With the ultimate objective of providing patients with complete control over their data, our work is focused on creating a state-of-the-art infrastructure for the safe transfer of personal health data via cloud computing. Patients have the option of encrypting their PHR files, which provides an additional layer of security and allows them to set access control limits, such as who has access to their files and to what degree. When data is encrypted in the cloud, only approved users may access it. Using cloud-based platforms to share health records raises concerns over confidentiality and privacy, which are addressed by the proposed method. Patients may still benefit from data interchange for the goal of better healthcare thanks to the framework’s provision of an encrypted PHR file option. This framework may accommodate attribute-based encryption (ABE) and other kinds of granular security. These measures ensure that people may continue to have access to, and make changes to, their own medical data, even when stored in the cloud. This article presents research that attempts to meet the demands of patients while also providing a safe method of transferring individual health information through cloud computing.
10

Mora, E., M. Gemmani, J. Zayas-Castro, and D. A. Martinez. "Uncovering Hospitalists’ Information Needs from Outside Healthcare Facilities in the Context of Health Information Exchange Using Association Rule Learning." Applied Clinical Informatics 06, no. 04 (2015): 684–97. http://dx.doi.org/10.4338/aci-2015-06-ra-0068.

Abstract:
Background: Important barriers to health information exchange (HIE) adoption are clinical workflow disruptions and troubles with the system interface. Prior research suggests that HIE interfaces providing faster access to useful information may stimulate use and reduce barriers to adoption; however, little is known about the informational needs of hospitalists. Objective: To study the association between patient health problems and the type of information requested from outside healthcare providers by hospitalists of a tertiary care hospital. Methods: We searched operational data associated with the fax-based exchange of patient information (a previous HIE implementation) between hospitalists of an internal medicine department in a large urban tertiary care hospital in Florida and any other affiliated or unaffiliated healthcare provider. All hospitalizations from October 2011 to March 2014 were included in the search. Strong association rules between health problems and the types of information requested during each hospitalization were discovered using the Apriori algorithm and were then validated by a team of hospitalists of the same department. Results: Only 13.7% (2,089 out of 15,230) of the hospitalizations generated at least one request for patient information from other providers. The transactional data showed that 20 strong association rules exist between specific health problems and types of information. Among the 20 rules, for example, abdominal pain, chest pain, and anaemia patients are highly likely to have medical records and outside imaging results requested.
Other health conditions prone to have records requested were lower urinary tract infection and back pain. Conclusions: The presented list of strong co-occurrences of health problems and types of information requested by hospitalists from outside healthcare providers not only informs the implementation and design of HIE but also helps to target future research on the impact of having access to outside information for specific patient cohorts. Our data-driven approach helps to reduce the typical biases of qualitative research.
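The abstract above mines association rules between health problems and information requests with the Apriori algorithm. A minimal sketch of the first two Apriori levels (frequent single items and pairs, then rules filtered by support and confidence) over hypothetical hospitalization "transactions"; the diagnoses and thresholds here are illustrative, not the study's:

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.3, min_confidence=0.6):
    # Count itemsets of size 1 and 2 (the first two Apriori levels),
    # then form rules A -> B that pass support and confidence cutoffs.
    n = len(transactions)
    singles = Counter()
    pairs = Counter()
    for t in transactions:
        items = set(t)
        singles.update(items)
        pairs.update(frozenset(p) for p in combinations(sorted(items), 2))
    rules = []
    for pair, count in pairs.items():
        if count / n < min_support:
            continue  # infrequent pair: prune, as Apriori does
        a, b = tuple(pair)
        for lhs, rhs in ((a, b), (b, a)):
            conf = count / singles[lhs]  # P(rhs | lhs)
            if conf >= min_confidence:
                rules.append((lhs, rhs, count / n, conf))
    return rules

# Hypothetical hospitalizations: diagnosis plus information requested.
data = [
    {"chest pain", "outside imaging"},
    {"chest pain", "outside imaging"},
    {"chest pain", "medical records"},
    {"back pain", "medical records"},
]
for lhs, rhs, support, conf in mine_rules(data):
    print(f"{lhs} -> {rhs} (support={support:.2f}, confidence={conf:.2f})")
```

On this toy data the miner surfaces "chest pain -> outside imaging" and its converse, mirroring the kind of rule the study reports at far larger scale.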
11

Li, Tianyu, Matthew Butrovich, Amadou Ngom, Wan Shen Lim, Wes McKinney, and Andrew Pavlo. "Mainlining databases." Proceedings of the VLDB Endowment 14, no. 4 (December 2020): 534–46. http://dx.doi.org/10.14778/3436905.3436913.

Abstract:
The proliferation of modern data processing tools has given rise to open-source columnar data formats. These formats help organizations avoid repeated conversion of data to a new format for each application. However, these formats are read-only, and organizations must use a heavy-weight transformation process to load data from on-line transactional processing (OLTP) systems. As a result, DBMSs often fail to take advantage of full network bandwidth when transferring data. We aim to reduce or even eliminate this overhead by developing a storage architecture for in-memory database management systems (DBMSs) that is aware of the eventual usage of its data and emits columnar storage blocks in a universal open-source format. We introduce relaxations to common analytical data formats to efficiently update records and rely on a lightweight transformation process to convert blocks to a read-optimized layout when they are cold. We also describe how to access data from third-party analytical tools with minimal serialization overhead. We implemented our storage engine based on the Apache Arrow format and integrated it into the NoisePage DBMS to evaluate our work. Our experiments show that our approach achieves comparable performance with dedicated OLTP DBMSs while enabling orders-of-magnitude faster data exports to external data science and machine learning tools than existing methods.
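The abstract above hinges on converting between row-oriented (OLTP) records and columnar blocks. A toy Python sketch of that transposition (not the NoisePage/Arrow storage engine itself, which works on typed, fixed-size memory blocks):

```python
def to_columnar(rows):
    # Transpose row-oriented records (OLTP-style) into column arrays,
    # the layout open columnar formats such as Arrow use for analytics.
    if not rows:
        return {}
    columns = {key: [] for key in rows[0]}
    for row in rows:
        for key, value in row.items():
            columns[key].append(value)
    return columns

def to_rows(columns):
    # Inverse transformation back to row-oriented records.
    keys = list(columns)
    length = len(columns[keys[0]]) if keys else 0
    return [{k: columns[k][i] for k in keys} for i in range(length)]

rows = [
    {"id": 1, "amount": 9.99},
    {"id": 2, "amount": 14.50},
]
cols = to_columnar(rows)
print(cols["amount"])  # an analytical scan touches only this one array
assert to_rows(cols) == rows
```

The paper's contribution is avoiding this transformation as a heavyweight batch step: hot data stays update-friendly while cold blocks are relaxed into the read-optimized columnar layout in place.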
12

Mold, Freda, Beverley Ellis, Simon De Lusignan, Aziz Sheikh, Jeremy C. Wyatt, Mary Cavill, Georgios Michalakidis, et al. "The provision and impact of online patient access to their electronic health records (EHR) and transactional services on the quality and safety of health care: systematic review protocol." Journal of Innovation in Health Informatics 20, no. 4 (September 27, 2013): 271–82. http://dx.doi.org/10.14236/jhi.v20i4.17.

13

Kosadi, Ferry, and Wajib Ginting. "SOCIALIZATION OF FINANCIAL ACCOUNTING STANDARDS FOR MICRO, SMALL AND MEDIUM ENTITIES (SAK EMKM) AND WORKSHOP ON SPREADSHEET APPLICATIONS FOR FINANCIAL REPORTS." Inaba of Community Services Journal ( Inacos-J) 1, no. 1 (June 30, 2022): 30–47. http://dx.doi.org/10.56956/inacos.v1i1.32.

Abstract:
The purpose of this activity is to increase knowledge and understanding of formal financial reports in accordance with SAK EMKM, through socialization of the process of accounting records and the preparation of financial reports, as well as training in spreadsheet application software to compile financial reports with the transactional approach commonly used by business actors, namely cash disbursements and receipts, with automation of the recording of inventories and fixed assets and the preparation of formal financial reports in the form of Profit and Loss and Financial Position (Balance Sheet) statements. In the initial stage of activities, several interviews were conducted to learn about the production, buying and selling processes as well as recording habits, including the recording of fixed assets and inventories. This was followed by the preparation of socialization materials on recording and financial reporting based on SAK EMKM, and the development of an application for recording transactions with automated generation of financial reports using spreadsheet software, namely Microsoft Excel with VBA and macros. The socialization of recording and preparation of financial reports based on SAK EMKM and a workshop on the use of spreadsheet applications in making financial reports were then carried out. The results show that the majority of MSME actors tend not to register formally and do not prepare financial reports based on SAK EMKM. The preparation of financial statements is carried out only when there is a need for legality related to access to funding, involving third parties in its preparation.
14

Boen, Courtney E., Rebecca Anna Schut, and Nick Graetz. "The Painful and Chilling Effects of Legal Violence: Immigration Enforcement and Racialized Legal Status Inequities in Worker Well-Being." Population Research and Policy Review 43, no. 2 (March 13, 2024). http://dx.doi.org/10.1007/s11113-024-09862-x.

Abstract:
AbstractA wave of restrictive immigration policies implemented over the past several decades dramatically increased immigrant detentions and deportations in the United States (U.S.), with important consequences for a host of immigrant outcomes. Still, questions remain as to how temporal and geographic variation in immigration enforcement within and across the U.S. shaped racialized legal status inequities in health and well-being, particularly among those employed in precarious occupations. To fill this gap, we interrogated the links between changes in county-level immigration enforcement and racialized legal status inequalities in musculoskeletal pain and social welfare benefits utilization among U.S. agricultural workers over nearly two decades (2002–2018). We merged data from three sources [(1) restricted-access, geocoded data from the National Agricultural Workers Survey (NAWS) (n = 37,619); (2) county-level immigration enforcement data from the Transactional Records Access Clearinghouse (TRAC); and (3) population data from the Census and American Community Survey (ACS)] and estimated linear probability models with year, month, and state fixed effects. We show that, in counties with high enforcement rates, workers—especially undocumented workers—were at increased risk of musculoskeletal pain, including pain that was severe. Heightened enforcement was also associated with declines in needs-based benefits utilization, especially among documented and U.S.-citizen non-White workers and undocumented White and non-White workers. Together, these findings highlight how changes in sociopolitical and legal contexts can shift and maintain racialized legal status hierarchies, with especially important consequences for the well-being of vulnerable workers.
15

Bruzelius, Emilie, and Silvia S. Martins. "Recreational cannabis legalization and immigration enforcement: a state-level analysis of arrests and deportations in the United States, 2009–2020." BMC Public Health 24, no. 1 (April 1, 2024). http://dx.doi.org/10.1186/s12889-024-18334-y.

Abstract:
Background: Recreational cannabis laws (RCL) in the United States (US) can have important implications for people who are non-citizens, including those with and without formal documentation, and those who are refugees or seeking asylum. For these groups, committing a cannabis-related infraction, even a misdemeanor, can constitute grounds for status ineligibility, including arrest and deportation under federal immigration policy, regardless of state law. Despite the interconnections between immigration and drug policy, the potential impacts of increasing state cannabis legalization on immigration enforcement are unexplored. Methods: In this repeated cross-sectional analysis, we tested the association between state-level RCL adoption and the monthly, state-level prevalence of immigration arrests and deportations related to cannabis possession. Data were from the Transactional Records Access Clearinghouse. Immigration arrest information was available from Oct-2014 to May-2018, and immigration deportation information was available from Jan-2009 to Jun-2020. To test associations with RCLs, we fit Poisson fixed effects models that controlled for pre-existing differences between states, secular trends, and potential sociodemographic, sociopolitical, and setting-related confounders. Sensitivity analyses explored potential violations of assumptions and sensitivity to modeling specifications. Results: Over the observation period, there were 7,739 immigration arrests and 48,015 deportations referencing cannabis possession. By 2020, 12 states had adopted recreational legalization, and on average immigration enforcement was lower among RCL states than among non-RCL states. In primary adjusted models, we found no meaningful changes in arrest prevalence, either immediately following RCL adoption (Prevalence Ratio [PR]: 0.84 [95% Confidence Interval [CI]: 0.57, 1.11]) or 1 year after the law was effective (PR: 0.88 [CI: 0.56, 1.20]).
For the deportation outcome, however, RCL adoption was associated with a moderate relative decrease in deportation prevalence in RCL versus non-RCL states (PR: 0.68 [CI: 0.56, 0.80]; PR with 1-year lag: 0.68 [CI: 0.54, 0.82]). Additional analyses were mostly consistent but suggested some sensitivity to modeling specification. Conclusions: Our findings suggest that decreasing penalties for cannabis possession through state RCLs may reduce some aspects of immigration enforcement related to cannabis possession. Greater attention to the immigration-related consequences of current drug control policies is warranted, particularly as more states weigh the public health benefits and drawbacks of legalizing cannabis.
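The abstract above reports prevalence ratios with 95% confidence intervals. A simplified, unadjusted calculation (the Katz log-scale method; not the paper's Poisson fixed-effects models, which additionally adjust for state and time effects) with hypothetical counts:

```python
import math

def prevalence_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    # PR = prevalence among exposed (e.g. RCL states) divided by
    # prevalence among unexposed (e.g. non-RCL states).
    p1 = events_exposed / n_exposed
    p0 = events_unexposed / n_unexposed
    pr = p1 / p0
    # Approximate 95% CI on the log scale (Katz method).
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_unexposed - 1 / n_unexposed)
    lo = math.exp(math.log(pr) - 1.96 * se)
    hi = math.exp(math.log(pr) + 1.96 * se)
    return pr, lo, hi

# Hypothetical counts: events per units at risk in each group.
pr, lo, hi = prevalence_ratio(68, 1000, 100, 1000)
print(f"PR = {pr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

A PR below 1 with a CI excluding 1, as in the deportation result above, indicates lower prevalence in the exposed group than would be expected by chance alone.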
16

Balavenkataraman Kadhirvelu, Vishnukumar, Kessy Abarenkov, Allan Zirk, Joana Paupério, Guy Cochrane, Suran Jayathilaka, Olaf Bánki, et al. "Enabling Community Curation of Biological Source Annotations of Molecular Data Through PlutoF and the ELIXIR Contextual Data Clearinghouse." Biodiversity Information Science and Standards 6 (August 23, 2022). http://dx.doi.org/10.3897/biss.6.93595.

Abstract:
The advancements in sequencing technologies have greatly contributed to the documentation of Earth’s biodiversity. However, for exploring the full potential of molecular resources for biodiversity, there needs to be a good linkage between sequence data and its biological source, contributing to a network of connected data in the biodiversity research cycle. This requires a foundation of well-structured and accessible annotations in the molecular sequence repositories. The International Nucleotide Sequence Database Collaboration (INSDC), of which the European Nucleotide Archive (ENA) is its European node, holds a large amount of annotations associated with sequence data, relating to its biological source (e.g., specimens in natural history collections). However, for a number of records, these annotations may be incomplete (e.g., missing voucher information), ambiguous or even inaccurate. Therefore, we have implemented a workflow that allows third-party annotations to be attached to sequence and sample records using two existing services, the PlutoF platform and the ELIXIR Contextual Data ClearingHouse. This work was developed within the scope of the BiCIKL (Biodiversity Community Integrated Knowledge Library) project, which aims to establish open science practices in the biodiversity domain. PlutoF is an online data management platform that also provides computing services for biology-related research. PlutoF features allow registered users to enter their own data and access public data at INSDC. Users can enter and manage a range of data, as taxonomic classifications, occurrences, etc. This platform also includes a module that allows the addition of third-party annotations (on material source, taxonomic identification, etc.) linked to specimens or sequence records. This module was already in use by the UNITE community for annotation of INSDC rDNA Internal Transcribed Spacer sequence datasets (Abarenkov et al. 2021). 
These UNITE annotations are displayed in the National Center for Biotechnology Information (NCBI) records through links to the PlutoF platform. However, there was a need for an automated solution that allowed third-party annotations to be attached to any sequence or sample record at INSDC. This was implemented through the operation of the ELIXIR Contextual Data Clearinghouse (hereafter the Clearinghouse). The Clearinghouse provides a simple RESTful Application Programming Interface (API) to support the submission of additions and improvements to current metadata attributes, such as information on material sources, on records publicly available in the ELIXIR data resources. The Clearinghouse enables the submission of these corrected metadata from databases (such as the PlutoF platform) to the primary data repositories. The workflow developed is shown in Fig. 1 and consists of the following steps: i) users annotate sequence metadata that is regularly downloaded from INSDC using NCBI’s E-utilities; ii) an annotation proposal is created and a verification notification is sent to an assigned reviewer; iii) the reviewer evaluates the annotation proposal and accepts it or rejects it with comments; iv) if the annotation proposal is accepted, the annotated fields that can be mapped to ENA fields are pushed to the Clearinghouse using its RESTful API. Annotations received at ENA are then reviewed before being displayed. This workflow is implemented through a web interface in PlutoF, which allows user-friendly and effortless reporting of corrections or additions to biological source metadata in sequence records. Overall, we expect this tool to contribute to the enrichment of metadata associated with sequence records, thereby increasing the links between molecular and biodiversity resources and enabling sequencing data to deliver their full potential for biodiversity conservation.
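The final step of the workflow described in this abstract, pushing accepted annotations to a Clearinghouse-style REST API, can be sketched as follows. This is a minimal illustration under stated assumptions: the endpoint path and the payload field names (`recordId`, `attributePost`, `valuePost`, `providerName`) are hypothetical placeholders, not the documented Clearinghouse schema.

```python
import json

# Hypothetical endpoint; the real Clearinghouse API may differ in path and schema.
CLEARINGHOUSE_URL = "https://www.ebi.ac.uk/ena/clearinghouse/api/curations"

def build_curation(record_id, attribute, value, provider="PlutoF"):
    """Assemble one third-party annotation ('curation') for a sequence record."""
    return {
        "recordType": "sequence",
        "recordId": record_id,       # e.g. an INSDC accession
        "attributePost": attribute,  # metadata field being corrected
        "valuePost": value,          # proposed new value
        "providerName": provider,    # database submitting the curation
    }

def submit(curation, token, post=None):
    """POST an accepted curation; `post` is injectable so the call can be tested
    without network access."""
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    body = json.dumps([curation])    # assume the API accepts a list of curations
    if post is None:
        import urllib.request
        req = urllib.request.Request(CLEARINGHOUSE_URL, body.encode(), headers)
        return urllib.request.urlopen(req)
    return post(CLEARINGHOUSE_URL, body, headers)
```

In PlutoF's workflow such a call would only be made at step iv, after a reviewer has accepted the annotation proposal.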
APA, Harvard, Vancouver, ISO, and other styles
17

Rolan, Gregory, and Antonina Lewis. "The perpetual twilight of records: consentful recordkeeping as moral defence." Archival Science, April 26, 2024. http://dx.doi.org/10.1007/s10502-024-09438-w.

Full text
Abstract:
In this article, we examine the significance of establishing participatory and consentful recordkeeping practice in the face of ubiquitous use of records beyond their original intent. Among such secondary uses is the decontextualisation of data as part of the ‘industrialisation’ of access and use of ‘historical’ records within current transactional contexts, together with a wide range of data sharing practices arising from contemporary data science paradigms. To situate the call to action for consentful recordkeeping practice, we begin the article by exploring how human ability to navigate through the perpetual twilight of records becomes increasingly murky when a wholesale approach to data collection and governance is applied by machine learning practitioners. We then re-frame some classical archival principles to align them with participatory approaches; specifically, by expanding the scope of Jenkinsonian ‘moral defence’ as an imperative for proactive engagement with the Archival Multiverse. We then describe a case study of consentful recordkeeping in practice, using the example of the AiLECS Lab’s newly developed collection acquisition and management system. This principles-based framework informs our practices for collecting and curating datasets for machine learning research and development and aims to privilege the ongoing consent of those represented in records to their use. In the context of this work, our core premise is that technologies designed to prevent exploitation of children should aim to avoid underlying data practices that are themselves exploitative (of children or adults).
APA, Harvard, Vancouver, ISO, and other styles
18

Poon, Neo, Claire Haworth, Elizabeth Dolan, and Anya Skatova. "Studying Health and Illness Experience using Linked Data (SHIELD): Empowering customers to donate shopping data for chronic pain research." International Journal of Population Data Science 9, no. 4 (June 10, 2024). http://dx.doi.org/10.23889/ijpds.v9i4.2420.

Full text
Abstract:
Introduction & Background Chronic pain is considered a priority in healthcare and a threat to well-being across the globe; it is thus crucial to accurately measure the national levels of pain conditions and their impacts on workplace productivity and well-being. Chronic pain has traditionally been studied in isolation with either self-reported survey data or standalone shopping records. The former are limited in scale and can be marred by response biases, while the latter lack ‘ground truths’: what research teams can measure are usually the purchase patterns of pain relief products, but neither the severity nor types of pain conditions. Objectives & Approach Data donation tools offer a novel approach to studying chronic pain by linking the two aspects and establishing statistical relationships between medicine consumption and the multiple facets of pain experience. In a survey, we asked participants (N = 953) to share their loyalty card data with us, made possible by the data portability tool provided by Tesco (the largest supermarket chain in the United Kingdom) as part of the General Data Protection Regulation (GDPR). Based on questions adopted from popular inventories used in health research (e.g., EQ5D Health States, ONS4 Well-being, WEMWBS scales), we also asked participants to report the details of their pain conditions, hours of employment, and both general and mental health states. This allowed us to associate chronic pain, both subjective and objective (i.e., reflected by medicine consumption), with its economic and personal consequences. Data collection was conducted via research panel providers and should therefore approximate national representativeness. Relevance to Digital Footprints This work links digital footprints data donated by individuals to self-reported survey data, and also develops an infrastructure for these data to be collected and safely stored. 
Conclusions & Implications One key value of this project is to pioneer a measure of chronic pain that future analytic work can apply to transactional records of much larger scale. Our research team has access to an array of different digital footprints data, including longitudinal transactional data provided by a major pharmacy chain (~20 million customers and ~429 million baskets). In order to utilise these data and associate them with the regional workplace productivity measures and well-being data released by the Office for National Statistics, a metric must be defined that extracts the prevalence of chronic pain from shopping data, informed by the patterns found in the data donation project.
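A metric of the kind the abstract calls for, extracting a chronic-pain signal from basket-level shopping records, could take a shape like the sketch below. Everything here is illustrative: the product set, the per-basket rate, and the 10% threshold are assumptions for demonstration, not the measure the SHIELD team derived.

```python
# Illustrative category set -- not a validated clinical product list.
PAIN_RELIEF = {"paracetamol", "ibuprofen", "codeine", "heat patch"}

def pain_purchase_rate(baskets):
    """Fraction of a customer's baskets containing at least one pain-relief
    product; `baskets` is a list of item-name lists (one list per basket)."""
    if not baskets:
        return 0.0
    hits = sum(any(item in PAIN_RELIEF for item in b) for b in baskets)
    return hits / len(baskets)

def flag_possible_chronic_pain(customer_baskets, threshold=0.10):
    """Apply the rate per customer and flag those above an assumed threshold,
    which in practice would be calibrated against the donated survey data."""
    return {cid: pain_purchase_rate(b) >= threshold
            for cid, b in customer_baskets.items()}
```

The data donation project's linked survey responses are what would let such a threshold be calibrated against self-reported pain severity rather than chosen arbitrarily.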
APA, Harvard, Vancouver, ISO, and other styles
19

Hinton, Lisa, Karolina Kuberska, Francesca Dakin, Nicola Boydell, Graham Martin, Tim Draycott, Cathy Winter, et al. "A qualitative study of the dynamics of access to remote antenatal care through the lens of candidacy." Journal of Health Services Research & Policy, April 21, 2023, 135581962311653. http://dx.doi.org/10.1177/13558196231165361.

Full text
Abstract:
Objective We aimed to explore the experiences and perspectives of pregnant women, antenatal healthcare professionals, and system leaders to understand the impact of the implementation of remote provision of antenatal care during the COVID-19 pandemic and beyond. Methods We conducted a qualitative study involving semi-structured interviews with 93 participants, including 45 individuals who had been pregnant during the study period, 34 health care professionals, and 14 managers and system-level stakeholders. Analysis was based on the constant comparative method and used the theoretical framework of candidacy. Results We found that remote antenatal care had far-reaching effects on access when understood through the lens of candidacy. It altered women’s own identification of themselves and their babies as eligible for antenatal care. Navigating services became more challenging, often requiring considerable digital literacy and sociocultural capital. Services became less permeable, meaning that they were more difficult to use and demanding of the personal and social resources of users. Remote consultations were seen as more transactional in character and were limited by lack of face-to-face contact and safe spaces, making it more difficult for women to make their needs – both clinical and social – known, and for professionals to assess them. Operational and institutional challenges, including problems in sharing of antenatal records, were consequential. There were suggestions that a shift to remote provision of antenatal care might increase risks of inequities in access to care in relation to every feature of candidacy we characterised. Conclusion It is important to recognise the implications for access to antenatal care of a shift to remote delivery. It is not a simple swap: it restructures many aspects of candidacy for care in ways that pose risks of amplifying existing intersectional inequalities that lead to poorer outcomes. 
Policy and practice action is needed to address these challenges and mitigate these risks.
APA, Harvard, Vancouver, ISO, and other styles
20

Rios, Nelson, Sharif Islam, James Macklin, and Andrew Bentley. "Technical Considerations for a Transactional Model to Realize the Digital Extended Specimen." Biodiversity Information Science and Standards 5 (September 3, 2021). http://dx.doi.org/10.3897/biss.5.73812.

Full text
Abstract:
Technological innovations over the past two decades have given rise to the online availability of more than 150 million specimen and species-lot records from biological collections around the world through large-scale biodiversity data-aggregator networks. In the present landscape of biodiversity informatics, collections data are captured and managed locally in a wide variety of databases and collection management systems and then shared online as point-in-time Darwin Core archive snapshots. Data providers may publish periodic revisions to these data files, which are retrieved, processed and re-indexed by data aggregators. This workflow has resulted in data latencies and lags of months to years for some data providers. The Darwin Core Standard (Wieczorek et al. 2012) provides guidelines for representing biodiversity information digitally, yet varying institutional practices and lack of interoperability between collection management systems continue to limit semantic uniformity, particularly with regard to the actual content of data within each field. Although some initiatives have begun to link data elements, our ability to comprehensively link all of the extended data associated with a specimen, or related specimens, is still limited due to the low uptake and usage of persistent identifiers. The concept now under consideration is to create a Digital Extended Specimen (DES) that adheres to the Findable, Accessible, Interoperable and Reusable (FAIR) principles of data management and stewardship and is the cumulative digital representation of all data, derivatives and products associated with a physical specimen, which are individually distinguished and linked by persistent identifiers on the Internet to create a web of knowledge. Biodiversity data aggregators that mobilize data across multiple institutions routinely perform data transformations in an attempt to provide a clean and consistent interpretation of the data. 
These aggregators are typically unable to interact directly with institutional data repositories, thereby limiting potentially fruitful opportunities for annotation, versioning, and repatriation. The ability to track such data transactions and satisfy the accompanying legal implications (e.g. Nagoya Protocol) is becoming a necessary component of data publication that existing standards do not adequately address. Furthermore, no mechanisms exist to assess the “trustworthiness” of data, critical to scientific integrity and reproducibility, or to provide attribution metrics for collections to advocate for their contribution or effectiveness in supporting such research. Since the introduction of Darwin Core Archives (Wieczorek et al. 2012), little has changed in the underlying mechanisms for publishing natural science collections data, and we are now at a point where new innovations are required to meet current demand for continued digitization, access, research and management. One solution may involve changing the biodiversity data publication paradigm to one based on the atomized transactions relevant to each individual data record. These transactions, when summed over time, allow us to realize the most recently accepted revision as well as historical and alternative perspectives. In order to realize the Digital Extended Specimen ideals and the linking of data elements, this transactional model combined with open and FAIR data protocols, application programming interfaces (APIs), repositories, and workflow engines can provide the building blocks for the next generation of natural science collections and biodiversity data infrastructures and services. These and other related topics have been the focus of phase 2 of the global consultation on converging Digital Specimens and Extended Specimens. 
Based on these discussions, this presentation will explore a conceptual solution leveraging elements from distributed version control, cryptographic ledgers and shared redundant storage to overcome many of the shortcomings of contemporary approaches.
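The transactional model this abstract proposes, atomized transactions that, summed over time, yield both the current revision and historical views of a record, can be sketched as a hash-chained log in the spirit of the distributed version control and cryptographic ledgers it mentions. The field names and chaining scheme below are illustrative assumptions, not a proposed standard.

```python
import hashlib
import json

def tx(record_id, field, value, prev_hash=""):
    """One atomized transaction; hash-chaining over the previous transaction
    gives the log a tamper-evident, ledger-like structure."""
    body = {"record": record_id, "field": field, "value": value, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def replay(log, record_id, upto=None):
    """Sum transactions over time to materialise the currently accepted
    revision of a record, or any historical state via `upto`."""
    state = {}
    for i, t in enumerate(log):
        if upto is not None and i >= upto:
            break
        if t["record"] == record_id:
            state[t["field"]] = t["value"]
    return state
```

Because every past transaction is retained, an earlier taxonomic identification is never lost when a new one is accepted; it simply becomes one of the "historical and alternative perspectives" the log can reproduce.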
APA, Harvard, Vancouver, ISO, and other styles
21

Bila, Eleni, John Derrick, Simon Doherty, Brijesh Dongol, Gerhard Schellhorn, and Heike Wehrheim. "Modularising Verification Of Durable Opacity." Logical Methods in Computer Science Volume 18, Issue 3 (July 28, 2022). http://dx.doi.org/10.46298/lmcs-18(3:7)2022.

Full text
Abstract:
Non-volatile memory (NVM), also known as persistent memory, is an emerging paradigm for memory that preserves its contents even after power loss. NVM is widely expected to become ubiquitous, and hardware architectures are already providing support for NVM programming. This has stimulated interest in the design of novel concepts ensuring correctness of concurrent programming abstractions in the face of persistency and in the development of associated verification approaches. Software transactional memory (STM) is a key programming abstraction that supports concurrent access to shared state. In a fashion similar to linearizability as the correctness condition for concurrent data structures, there is an established notion of correctness for STMs known as opacity. We have recently proposed durable opacity as the natural extension of opacity to a setting with non-volatile memory. Together with this novel correctness condition, we designed a verification technique based on refinement. In this paper, we extend this work in two directions. First, we develop a durably opaque version of NOrec (no ownership records), an existing STM algorithm proven to be opaque. Second, we modularise our existing verification approach by separating the proof of durability of memory accesses from the proof of opacity. For NOrec, this allows us to re-use an existing opacity proof and complement it with a proof of the durability of accesses to shared state.
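The value-based validation at the heart of NOrec, which this paper makes durably opaque, can be sketched in a deliberately simplified, single-threaded model. The sketch omits the global sequence lock, retry loops, and the flush/fence steps to persistent memory that the durable variant adds; it only shows the read-set bookkeeping, redo logging, and commit-time validation.

```python
class Tx:
    """Toy NOrec-style transaction over a dict standing in for shared memory."""

    def __init__(self, memory):
        self.mem = memory
        self.reads = {}    # addr -> value observed (the read set)
        self.writes = {}   # redo log: addr -> pending value

    def read(self, addr):
        if addr in self.writes:          # read-your-own-writes
            return self.writes[addr]
        val = self.mem[addr]
        self.reads[addr] = val           # remember the value, not a version
        return val

    def write(self, addr, val):
        self.writes[addr] = val          # buffered until commit (redo logging)

    def validate(self):
        # Value-based validation: every location read must still hold the
        # value observed, i.e. no conflicting writer intervened.
        return all(self.mem[a] == v for a, v in self.reads.items())

    def commit(self):
        if not self.validate():
            return False                 # abort: a concurrent update detected
        self.mem.update(self.writes)     # write-back; the durable variant
        return True                      # would also flush these to NVM here
```

Validating values rather than per-location ownership records is exactly what gives NOrec its name, and it is this commit path that a durable version must extend with persistency guarantees.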
APA, Harvard, Vancouver, ISO, and other styles
22

Buschbom, Jutta, Breda Zimkus, Andrew Bentley, Mariko Kageyama, Christopher Lyal, Dirk Neumann, Andra Waagmeester, and Alex Hardisty. "Participative Decision Making and the Sharing of Benefits: Laws, ethics, and data protection for building extended global communities." Biodiversity Information Science and Standards 5 (September 14, 2021). http://dx.doi.org/10.3897/biss.5.75168.

Full text
Abstract:
Transdisciplinary and cross-cultural cooperation and collaboration are needed to build extended, densely interconnected information resources. These are the prerequisites for the successful implementation and execution of, for example, an ambitious monitoring framework accompanying the post-2020 Global Biodiversity Framework (GBF) of the Convention on Biological Diversity (CBD; SCBD 2021). Data infrastructures that meet the requirements and preferences of concerned communities can focus and attract community involvement, thereby promoting participatory decision making and the sharing of benefits. Community acceptance, in turn, drives the development of the data resources and data use. Earlier this year, the alliance for biodiversity knowledge (2021a) conducted forum-based consultations seeking community input on designing the next generation of digital specimen representations and consequently enhanced infrastructures. The multitudes of connections that arise from extending the digital specimen representations through linkages in all “directions” will form a powerful network of information for research and application. Yet, with the power of an extended, accessible data network comes the responsibility to protect sensitive information (e.g., the locations of threatened populations, culturally context-sensitive traditional knowledge, or businesses’ fundamental data and infrastructure assets). In addition, existing legislation regulates access and the fair and equitable sharing of benefits. Current negotiations on ‘Digital Sequence Information’ under the CBD suggest such obligations might increase and become more complex in the context of extensible information networks. For example, in the case of data and resources funded by taxpayers in the EU, such access should follow the general principle of being “as open as possible; as closed as is legally necessary” (cp. EC 2016). 
At the same time, the international regulations of the CBD Nagoya Protocol (SCBD 2011) need to be taken into account. Summarizing main outcomes from the consultation discussions in the forum thread “Meeting legal/regulatory, ethical and sensitive data obligations” (alliance for biodiversity knowledge 2021b), we propose a framework of ten guidelines and functionalities to achieve community building and drive application:
1. Substantially contribute to the conservation and protection of biodiversity (cp. EC 2020).
2. Use language that is CBD conformant.
3. Show the importance of the digital and extensible specimen infrastructure for the continuing design and implementation of the post-2020 GBF, as well as the mobilisation and aggregation of data for its monitoring elements and indicators.
4. Strive to openly publish as much data and metadata as possible online.
5. Establish a powerful and well-thought-out layer of user and data access management, ensuring security of ‘sensitive data’.
6. Encrypt data and metadata where necessary at the level of an individual specimen or digital object; provide access via digital cryptographic keys.
7. Link obligations, rights and cultural information regarding use to the digital key (e.g. CARE principles (Carroll et al. 2020), Local Context-labels (Local Contexts 2021), licenses, permits, use and loan agreements, etc.).
8. Implement a transactional system that records every transaction.
9. Amplify workforce capacity across the digital realm, its work areas and workflows.
10. Do no harm (EC 2020): Reduce the social and ecological footprint of the implementation, aiming for a long-term sustainable infrastructure across its life-cycle, including development, implementation and management stages.
Balancing the needs for open access, as well as protection, accountability and sustainability, the framework is designed to function as a robust interface between the (research) infrastructure implementing the extensible network of digital specimen representations, and the myriad of applications and operations in the real world. With the legal, ethical and data protection layers of the framework in place, the infrastructure will provide legal clarity and security for data providers and users, specifically in the context of access and benefit sharing under the CBD and its Nagoya Protocol. 
Forming layers of protection, the characteristics and functionalities of the framework are envisioned to be flexible and finely-grained, adjustable to fulfill the needs and preferences of a wide range of stakeholders and communities, while remaining focused on the protection and rights of the natural world. Respecting different value systems and national policies, the framework is expected to allow a divergence of views to coexist and balance differing interests. Thus, the infrastructure of the digital extensible specimen network is fair and equitable to many providers and users. This foundation has the capacity and potential to bring together the diverse global communities using, managing and protecting biodiversity.
APA, Harvard, Vancouver, ISO, and other styles
23

Olujobi, Olusola Joshua, and Ebenezer Tunde Yebisi. "Combating the crimes of money laundering and terrorism financing in Nigeria: a legal approach for combating the menace." Journal of Money Laundering Control, February 17, 2022. http://dx.doi.org/10.1108/jmlc-12-2021-0143.

Full text
Abstract:
Purpose This study aims to investigate the Federal Government’s failure to combat money laundering and terrorism financing, and the various hurdles to effectively enforcing the Money Laundering (Prohibition) Act, 2012 (as amended), which prohibits illegal earnings and criminally induced investments in and out of Nigeria. This has had an impact on the country’s economic potential and its image in the international community. Despite many anti-corruption laws criminalising money laundering and terrorism financing, Nigeria is rated among the nations with the highest poverty index despite its immense natural resources. Design/methodology/approach This study uses a conceptual legal method to support a doctrinal, library-based investigation of existing material. This study also makes use of primary and secondary legislation, such as the Constitution, the Money Laundering (Prohibition) (Amended) Act 2012 and the Terrorism (Prevention) Act 2013 (as amended), as well as case law, international conventions, textbooks and peer-reviewed publications. A comparison of anti-money laundering legislation in Canada, the UK, Hong Kong, China and Nigeria was conducted, with lessons learned for Nigeria’s anti-money laundering and anti-terrorism financing laws. According to the findings, the Act is silent on the criminal use of legitimate earnings to fund terrorism and cultism. Findings There is no well-defined legal framework for asset recovery and confiscation. In Nigeria’s legal system, this evident void must be addressed immediately. To supplement existing efforts to prevent money laundering, the research develops a hybrid model that incorporates the inputs of government representatives and civil society organisations. This study suggests a complete revision of the Act to eliminate ambiguity and focus on the goals of global anti-money laundering and anti-terrorist funding restrictions. 
Research limitations/implications One of the limitations of this study is the paucity of literature and data on money laundering and terrorist financing in Nigeria, due to the secrecy surrounding the crimes and their transactional nature, which leave little room for the collection of statistical data. This is not to submit that no attempts have been made in the past or recent times to quantify the global value of money laundering and its effects on Nigeria’s economy. Such attempts have been inconclusive and inaccurate. Practical implications The dearth of records on the magnitude of money laundering in Nigeria has limited the generalisability of the research findings, owing to restricted access to some required information. However, this study is suitable for adoption in other sectors of the economy in dealing with clandestineness in money laundering and terrorism financing. Future researchers are encouraged to use the quantitative assessment method to appraise the effects of money laundering and terrorist financing laws and policies in Africa to supplement the current literature in the field. Originality/value The research develops a hybrid model that incorporates the inputs of government representatives and civil society organisations. This study suggests a complete revision of the Act to eliminate ambiguity and focus on the goals of global anti-money laundering and anti-terrorist funding restrictions.
APA, Harvard, Vancouver, ISO, and other styles
24

Grosjean, Marie, Morten Høfft, Marcos Gonzalez, Tim Robertson, and Andrea Hahn. "GRSciColl: Registry of Scientific Collections maintained by the community for the community." Biodiversity Information Science and Standards 5 (September 13, 2021). http://dx.doi.org/10.3897/biss.5.74354.

Full text
Abstract:
GRSciColl, the Registry of Scientific Collections, is a comprehensive, community-curated clearinghouse of collections information originally developed by the Consortium for the Barcode of Life (CBOL) and hosted by the Smithsonian Institution until 2019. It is now hosted and maintained in the Global Biodiversity Information Facility (GBIF) registry (see this news item). GRSciColl aims to improve access to information about institutions and the scientific collections they hold, and to facilitate access to the staff members who manage them. Anyone can use GRSciColl to search for collections based on their attributes (country, preservation type, etc.) as well as their codes and identifiers. These users will find information on what the collections contain, where they are located, who manages them and how to get into contact. Furthermore, institutions can use GRSciColl to be more visible and advertise their collections, both digitized and undigitized. Plus, the ability to get an overview of institutions and collections by country can help guide some of the data mobilisation efforts by national organizations. Finally, GRSciColl is a reference for institution and collection codes and identifiers, which can enable links from other systems (as exemplified in GBIF.org) and make the information more easily available. Engaging the community is crucial in maintaining that information. After the migration to GBIF, the first phase of development focused on data consolidation, integration with external systems, and on providing the necessary functionality and safeguards to move from a centrally maintained to a community curated system. With all these in place, the focus is now shifting to expanding the community of data editors, and to understanding how best to serve user needs. It can be difficult for institutions to maintain information in the various available data repositories. This is why we aim to synchronize GRSciColl with as many reliable sources as possible. 
In 2020, we set up weekly synchronization with Index Herbariorum, and we will be exploring synchronization with other sources such as the Consortium of European Taxonomic Facilities (CETAF) registry and the National Center for Biotechnology Information (NCBI) BioCollections database. In addition, we worked with the team at Integrated Digitized Biocollections (iDigBio) to import their collection information into GRSciColl. The data are now maintained in the GBIF registry and displayed on the iDigBio portal via the GRSciColl Application Programming Interface (API). GRSciColl’s new permission model aims to facilitate community curation. Anyone can suggest updates, and those changes can be applied or discarded by the appropriate reviewers: institution editors, country mediators, or administrators. With these changes in place, in 2021, we reached out to the GBIF Network to increase our pool of editors. Many GBIF Node managers are now involved in the curation of GRSciColl, and we are planning to likewise include applicants for the GBIF-managed funding programs such as “Biodiversity Information for Development” (BID) and “Biodiversity Information Fund for Asia” (BIFA). We also work with external collaborators, such as the Biodiversity Crisis Response Committee of the Society for the Preservation of Natural History Collections (SPNHC), to reach outside of the GBIF community. Alongside the support for data integration and curation, a second important aspect is the support for data use. The information available needs to be both accessible and relevant to the community. Specimen-related occurrences, published on GBIF.org, are cross-linked to GRSciColl entries whenever possible (see this example). 
As these links make use of collection and institution identifiers within individual specimen records, rather than relying on dataset entities, this procedure allows aggregation of specimen-related occurrences under their GRSciColl-registered collections and institutions, regardless of the way they were published on GBIF. This can help users and institutions get an overview of the collections digitization progress, whether through their own initiative, or from datasets contributed by other data publishers. The Collections API is under ongoing development to provide better ways to access the GRSciColl information: more filters, a way to download the result of a search, and an API lookup service to find institutions and collections associated with a given code or identifier. The latter was designed to improve database interoperability. By working together with the community, we want to ensure GRSciColl becomes and remains a tool they can rely on. There are many ways to get involved with GRSciColl:
- Anyone can check their institution and collection entries and suggest updates or additions via the suggestion buttons in the GRSciColl interface.
- You can become a registry editor on behalf of your institution or collection.
- If you work with a National registry and are interested in sharing the data on GRSciColl, please contact us at scientific-collections@gbif.org.
- Tell us how you would like to use the registry and GRSciColl. You can contact us by email (scientific-collections@gbif.org) or via our GitHub repository.
- You can become a volunteer translator to make the GRSciColl forms accessible in more languages.
- You can follow our 2021 Roadmap and log your feedback and ideas via the GBIF feedback system or directly on our GitHub repository.
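The code/identifier lookup service mentioned in this abstract is exposed through the GBIF registry API. The helper below only builds query URLs (so it needs no network access); the `/grscicoll/lookup` and `/grscicoll/institution/search` paths follow the public GBIF API but should be verified against its current reference before use.

```python
from urllib.parse import urlencode

BASE = "https://api.gbif.org/v1/grscicoll"

def lookup_url(institution_code=None, collection_code=None):
    """Build a lookup query resolving institution/collection codes to
    GRSciColl entities (the interoperability service described above)."""
    params = {}
    if institution_code:
        params["institutionCode"] = institution_code
    if collection_code:
        params["collectionCode"] = collection_code
    return f"{BASE}/lookup?{urlencode(params)}"

def institution_search_url(q, limit=20):
    """Free-text search over registered institutions."""
    return f"{BASE}/institution/search?{urlencode({'q': q, 'limit': limit})}"
```

A client would fetch such a URL and match on the returned entity keys, which is how external systems can link their own specimen records to GRSciColl entries.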
APA, Harvard, Vancouver, ISO, and other styles