To view other types of publications on this topic, follow the link: Kharkiv Open Data Web Portal.

Journal articles on the topic "Kharkiv Open Data Web Portal"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Kharkiv Open Data Web Portal."

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, where these details are present in the publication's metadata.

Browse journal articles across a wide range of disciplines and organize your bibliography correctly.

1

Kaspar, Frank, Frank Kratzenstein, and Andrea K. Kaiser-Weiss. "Interactive open access to climate observations from Germany." Advances in Science and Research 16 (May 28, 2019): 75–83. http://dx.doi.org/10.5194/asr-16-75-2019.

Full text
Abstract:
Abstract. During recent years, Germany's national meteorological service (Deutscher Wetterdienst, DWD) has significantly expanded open access to its climate observations. A first step was a simple FTP site offering downloadable archives with various categories of data, e.g. national and international station-based meteorological data, derived parameters, gridded products, and special categories such as phenological data. The data are based on DWD's observing systems for Germany as well as DWD's international activities. To improve interactive and user-friendly access to the data, a new portal has been developed. The portal serves a variety of user requirements that result from the broad range of applications of DWD's climate data. Here we provide an overview of the new climate data portal of DWD. It is based on a systematic implementation of OGC-based technologies. It allows easy graphical access to the station data, but also supports access via technical interfaces, especially Web Map and Web Feature Services.
2

Saleh Aloufi, Khalid. "Generating RDF resources from web open data portals." Indonesian Journal of Electrical Engineering and Computer Science 16, no. 3 (December 1, 2019): 1521. http://dx.doi.org/10.11591/ijeecs.v16.i3.pp1521-1529.

Full text
Abstract:
Open data are available from various private and public institutions in different resource formats. A great number of open datasets are already published through open data portals, where datasets and resources are mainly presented in tabular or sheet formats. However, such formats pose barriers to application development and web standards. One of the recommended web standards for semantic web applications is RDF. Various research efforts have focused on presenting open data in RDF format; however, no framework has transformed tabular open data into RDF while considering the HTML tags and properties of the resources and datasets. Therefore, a methodology is required to generate RDF resources from this type of open data resource. This methodology applies data transformations of open data from a tabular format to RDF files for the Saudi Open Data Portal. The methodology successfully transforms open data resources in sheet format into RDF resources. Recommendations and future work are given to enhance the development of open data.
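As a rough illustration of the kind of tabular-to-RDF transformation this abstract describes (not the paper's actual framework), a single sheet row can be mapped to triples in N-Triples syntax. The base URI and the column-to-property mapping below are invented for the sketch:

```python
# Minimal sketch: turn one tabular open-data row into RDF triples in
# N-Triples syntax, using only the standard library. The base URI and
# column-to-property mapping are illustrative assumptions, not the
# paper's methodology.

BASE = "http://example.org/opendata/"

def row_to_ntriples(dataset_id: str, row: dict) -> list[str]:
    """Map each column of a tabular row to one RDF triple."""
    subject = f"<{BASE}{dataset_id}>"
    triples = []
    for column, value in row.items():
        predicate = f"<{BASE}property/{column}>"
        # Escape embedded quotes so the literal stays valid N-Triples.
        literal = '"%s"' % str(value).replace('"', '\\"')
        triples.append(f"{subject} {predicate} {literal} .")
    return triples

row = {"name": "Riyadh Parks", "records": 120}
for t in row_to_ntriples("dataset42", row):
    print(t)
```

A real pipeline would additionally type the literals and reuse published vocabularies rather than a made-up namespace.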
3

Biswas, Nishan Kumar, and Faisal Hossain. "A scalable open-source web-analytic framework to improve satellite-based operational water management in developing countries." Journal of Hydroinformatics 20, no. 1 (November 28, 2017): 49–68. http://dx.doi.org/10.2166/hydro.2017.073.

Full text
Abstract:
Abstract Two software development hurdles to advancing real-world operationalization of satellite datasets for water management are addressed in this study. First, a simple, easy-to-build and open-source web portal connecting to a back-end complex model is developed for resource-constrained developing nations. Second, to enhance the skill of satellite-based predictions, an innovative and dynamic web analytics-based correction system is developed to reduce the uncertainty of satellite estimates. The correction system comprises dynamic precipitation bias correction and streamflow correction. Dynamically web crawled in-situ hydrologic data pertaining to the region are used to estimate satellite estimation bias. These corrected datasets are finally shared through the web portal. On average, these dynamic correction techniques reduced root mean squared error in streamflow by 80–90% for the case of South Asian river basins. The take-home message is that it is now possible to build cost-effective operational web portals based on satellite data and non-proprietary software.
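The dynamic correction idea can be illustrated with a toy sketch: a simple multiplicative bias correction of satellite estimates against in-situ observations, with RMSE computed before and after. The numbers are invented, and this is not the study's actual algorithm:

```python
# Toy sketch (assumed data, not the study's method): multiplicative bias
# correction of satellite streamflow estimates against gauge observations.
import math

def rmse(pred, obs):
    """Root mean squared error between two equal-length sequences."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def bias_correct(satellite, observed):
    """Scale satellite values by the mean ratio of observed to satellite."""
    ratio = sum(observed) / sum(satellite)
    return [s * ratio for s in satellite]

satellite = [12.0, 8.0, 20.0, 16.0]   # hypothetical satellite streamflow
observed  = [10.0, 7.0, 17.0, 14.0]   # hypothetical in-situ gauge values

corrected = bias_correct(satellite, observed)
print(rmse(satellite, observed), rmse(corrected, observed))
```

In this contrived example the correction removes most of the error; the paper's system applies far richer, dynamically web-crawled corrections.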
4

Gordov, E. P., V. N. Lykosov, and A. Z. Fazliev. "Web portal on environmental sciences 'ATMOS'." Advances in Geosciences 8 (June 6, 2006): 33–38. http://dx.doi.org/10.5194/adgeo-8-33-2006.

Full text
Abstract:
Abstract. The web portal ATMOS (http://atmos.iao.ru and http://atmos.scert.ru), developed under an INTAS grant, makes available to the international research community, environmental managers, and the interested public a bilingual information source for the domain of atmospheric physics and chemistry and the related application domain of air quality assessment and management. It offers access to integrated thematic information, experimental data, analytical tools and models, case studies, and related information and educational resources compiled, structured, and edited by the partners into a coherent and consistent thematic information resource. While offering the usual components of a thematic site, such as link collections, user group registration, a discussion forum, and a news section, the site is distinguished by its scientific information services and tools: on-line models and analytical tools, and data collections and case studies together with tutorial material. The portal is organized as a set of interrelated scientific sites that address basic branches of the atmospheric sciences and climate modeling as well as the applied domains of air quality assessment and management, modeling, and environmental impact assessment. Each scientific site is an information-computational system open for external access and realized by means of Internet technologies. The main basic science topics are devoted to atmospheric chemistry, atmospheric spectroscopy and radiation, atmospheric aerosols, and atmospheric dynamics and atmospheric models, including climate models. The portal ATMOS reflects the current transformation of the environmental sciences into exact (quantitative) sciences and is an effective example of the integration of modern information technologies and environmental sciences. This makes the portal both an auxiliary instrument for supporting interdisciplinary projects on the regional environment and an extensive educational resource in this important domain.
5

Luo, Yunhai, Benjamin C. Hitz, Idan Gabdank, Jason A. Hilton, Meenakshi S. Kagda, Bonita Lam, Zachary Myers, et al. "New developments on the Encyclopedia of DNA Elements (ENCODE) data portal." Nucleic Acids Research 48, no. D1 (November 12, 2019): D882–D889. http://dx.doi.org/10.1093/nar/gkz1062.

Full text
Abstract:
Abstract The Encyclopedia of DNA Elements (ENCODE) is an ongoing collaborative research project aimed at identifying all the functional elements in the human and mouse genomes. Data generated by the ENCODE consortium are freely accessible at the ENCODE portal (https://www.encodeproject.org/), which is developed and maintained by the ENCODE Data Coordinating Center (DCC). Since the initial portal release in 2013, the ENCODE DCC has updated the portal to make ENCODE data more findable, accessible, interoperable and reusable. Here, we report on recent updates, including new ENCODE data and assays, ENCODE uniform data processing pipelines, new visualization tools, a dataset cart feature, unrestricted public access to ENCODE data on the cloud (Amazon Web Services open data registry, https://registry.opendata.aws/encode-project/) and more comprehensive tutorials and documentation.
6

Akila, V., Kasi Glory, Pinnamaraju Sanjana Varma, Puchakayala Lakshmi Hemanjili, Tutari Vijaya Lohitha, and T. Sheela. "City Cleanliness Drive Web Portal Using Region Based Convolutional Neural Networks." E3S Web of Conferences 309 (2021): 01127. http://dx.doi.org/10.1051/e3sconf/202130901127.

Full text
Abstract:
Object detection plays an important role in computer vision. In settings such as city construction, managers often waste much energy, time, and resources cleaning up garbage that appears unexpectedly. As deep network systems grew in complexity, they became constrained by the availability of training data; in response, OpenCV and Google AI released the Open Images dataset publicly so that research and development on the study and analysis of images could proceed. Virtual street cleanliness is therefore central to this project. The existing system has disadvantages: garbage collection is not automated, and it does not use the best real-time algorithm for identifying objects. This project embeds these capabilities in the system, making it simple and effective for managers to keep a city or construction site clean.
7

Pitilakis, K., Z. Roumelioti, M. Manakou, D. Raptakis, K. Liakakis, A. Anastasiadis, and D. Pitilakis. "The web portal of the EUROSEISTEST strong ground motion database." Bulletin of the Geological Society of Greece 47, no. 3 (December 21, 2016): 1221. http://dx.doi.org/10.12681/bgsg.10978.

Full text
Abstract:
Strong motion data recorded during the 20 years of operation of the permanent network of EUROSEISTEST (Mygdonia basin, Northern Greece) have been homogenized and organized into a database easily accessible via the web. The EUROSEISTEST web portal and the application server running underneath are based solely on free and open-source software (F/OSS: MySQL v5.5, Ruby on Rails, SAC, Gnuplot, and numerous GNU supporting utilities). Its interface allows the user to easily search strong motion data from approximately 200 events and 26 strong motion stations using event-related, record-related, or station-related criteria. Further investigation of the data is possible in a graphical environment which includes plots of processed and unprocessed acceleration waveforms, velocity and displacement time histories, Fourier amplitude and response spectra, and spectrograms. A great effort was directed toward the inclusion of accurate and up-to-date earthquake metadata, as well as a wealth of station-related information such as geotechnical and geophysical site characterization measurements, subsoil structure, and site effects. Acceleration data can be easily downloaded in either SAC or ASCII format, while all station metadata are also available to download.
8

Sutherland, Jeffrey J., James L. Stevens, Kamin Johnson, Navin Elango, Yue W. Webster, Bradley J. Mills, and Daniel H. Robertson. "A Novel Open Access Web Portal for Integrating Mechanistic and Toxicogenomic Study Results." Toxicological Sciences 170, no. 2 (April 24, 2019): 296–309. http://dx.doi.org/10.1093/toxsci/kfz101.

Full text
Abstract:
Abstract. Applying toxicogenomics to improving the safety profile of drug candidates and crop protection molecules is most useful when it identifies relevant biological and mechanistic information that highlights risks and informs risk mitigation strategies. Pathway-based approaches, such as gene set enrichment analysis, integrate toxicogenomic data with known biological processes and pathways. Network methods help define unknown biological processes and offer data reduction advantages. Integrating the two approaches would improve interpretation of toxicogenomic information. Barriers to the routine application of these methods in genome-wide transcriptomic studies include a need for hands-on computer programming experience, the selection of one or more analysis methods (e.g. pathway analysis methods), the sensitivity of results to algorithm parameters, and challenges in linking differential gene expression to variation in safety outcomes. To facilitate adoption and reproducibility of gene expression analysis in safety studies, we have developed Collaborative Toxicogenomics, an open-access integrated web portal using the Django web framework. The software, developed in the Python programming language, is modular and extensible and implements best-practice methods in computational biology. New study results are compared with over 4000 rodent liver experiments from DrugMatrix and Open TG-GATEs. A unique feature of the software is the ability to integrate clinical chemistry and histopathology-derived outcomes with results from gene expression studies, leading to relevant mechanistic conclusions. We describe its application by analyzing the effects of several toxicants on liver gene expression and exemplify its application to predicting toxicity study outcomes upon chronic treatment from expression changes in acute-duration studies.
9

Cooper, Drew, Tebbe Ubben, Christine Knoll, Hanne Ballhausen, Shane O'Donnell, Katarina Braune, and Dana Lewis. "Open-source Web Portal for Managing Self-reported Data and Real-world Data Donation in Diabetes Research: Platform Feasibility Study." JMIR Diabetes 7, no. 1 (March 31, 2022): e33213. http://dx.doi.org/10.2196/33213.

Full text
Abstract:
Background: People with diabetes and their support networks have developed open-source automated insulin delivery systems to help manage their diabetes therapy and to improve their quality of life and glycemic outcomes. Under the hashtag #WeAreNotWaiting, a wealth of knowledge and real-world data have been generated by users of these systems but have been left largely untapped by research; opportunities for such multimodal studies remain open. Objective: We aimed to evaluate the feasibility of several aspects of open-source automated insulin delivery systems, including challenges related to data management and security across multiple disparate web-based platforms and challenges related to implementing follow-up studies. Methods: We developed a mixed methods study to collect questionnaire responses and anonymized diabetes data donated by participants, who included adults and children with diabetes and their partners or caregivers recruited through multiple diabetes online communities. We managed both front-end participant interactions and back-end data management with our web portal (called the Gateway). Participant questionnaire data from the electronic data capture (REDCap) and personal device data aggregation (Open Humans) platforms were pseudonymously and securely linked and stored within a custom-built database that used both open-source and commercial software. Participants were later given the option to include their health care providers in the study to validate their questionnaire responses; the database architecture was designed specifically with this kind of extensibility in mind. Results: Of 1052 visitors to the study landing page, 930 participated and completed at least one questionnaire. After health care professional validation of self-reported clinical outcomes was added to the study, an additional 164 individuals visited the landing page, with 142 completing at least one questionnaire. Of the optional study elements, 7 participant-health care professional dyads participated in the survey, and 97 participants who completed the survey donated their anonymized medical device data. Conclusions: The platform was accessible to participants while maintaining compliance with data regulations. The Gateway formalized a system of automated data matching between multiple data sets, which was a major benefit to researchers. Scalability of the platform was demonstrated with the later addition of self-reported data validation. This study demonstrated the feasibility of custom software solutions in addressing complex study designs. The Gateway portal code has been made available open source and can be leveraged by other research groups.
10

Rajabi, Enayat, and Wolfgang Greller. "Exposing Social Data as Linked Data in Education." International Journal on Semantic Web and Information Systems 15, no. 2 (April 2019): 92–106. http://dx.doi.org/10.4018/ijswis.2019040105.

Full text
Abstract:
According to recent studies, the social interactions of users such as sharing, rating, and reviewing can improve the value of digital learning objects and resources on the web. Linked data techniques, on the other hand, make different kinds of data available and reusable for other applications on the web. Exposing (meta)data, especially with a complex structure, as resource description framework (RDF) requires an ontology to bring all the data types under one umbrella. In this article, the authors propose an ontology in which social activities of users are exposed as linked data by reusing existing vocabularies. The proposed ontology has been implemented in a federated open educational resources (OER) portal, in which they published ratings, shares, comments, and other social activities assigned to around 1,000 OERs. This exposure allows other datasets, including harvested repositories, to explore the exposed social data related to e-learning objects according to the users' social engagement.
11

Kshetri, T. B., A. Chaksan, and S. Sharma. "THE ROLE OF OPEN-SOURCE PYTHON PACKAGE GEOSERVER-REST IN WEB-GIS DEVELOPMENT." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W2-2021 (August 19, 2021): 91–96. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w2-2021-91-2021.

Full text
Abstract:
Abstract. This work describes geoserver-rest, an open-source Python package for managing geospatial data in GeoServer; it is helpful for uploading, editing, and deleting raster and vector layers from various sources. It is also useful for generating styles and legends from the uploaded geospatial data; the generated legends can then be used for map visualization in a web-GIS platform. The package was successfully used to build a web-GIS portal for agricultural datasets of Afghanistan, which has around 6000 map layers. The main benefit that geoserver-rest provides to this project is the ability to upload data to GeoServer and create the style files dynamically. The created style files are linked directly to the corresponding layers, exposed through the Web Map Service (WMS) standard, and visualized in an interactive way.
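Under the hood, a wrapper like geoserver-rest automates calls to GeoServer's REST API. A minimal stdlib-only sketch of composing one such call, creating a workspace, is shown below; the URL and workspace name are placeholders, and the request is built but not sent:

```python
# Sketch of the kind of call a wrapper like geoserver-rest automates:
# a POST to GeoServer's REST endpoint that creates a workspace.
# Base URL and workspace name are placeholder assumptions.
import json
from urllib.request import Request

def create_workspace_request(base_url: str, name: str) -> Request:
    """Compose (but do not send) the workspace-creation request."""
    body = json.dumps({"workspace": {"name": name}}).encode()
    return Request(
        f"{base_url}/rest/workspaces",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = create_workspace_request("http://localhost:8080/geoserver", "agriculture")
print(req.full_url, req.method)
```

Sending the request would additionally require admin credentials (GeoServer's REST API is authenticated), which geoserver-rest handles for you.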
12

Chandra, Sushil, Rajeev Sonkar, Ujjwal Yadav, Pragati Srivastava, and Sanghmitra Sanghmitra. "Design and Develop the B2C Farming Information System Solution using Mobile and Web Geospatial Development Technologies for Smart Supply Chain of Fruit Crops of UP, India." International Journal of Computer Science and Mobile Applications 10, no. 1 (January 30, 2022): 1–17. http://dx.doi.org/10.47760/ijcsma.2022.v10i01.001.

Full text
Abstract:
The main objective of this paper is to use an innovative blend of open-source geospatial and software development technologies to maximize benefits for both exporters/buyers and growers/farmers by establishing a smart supply chain for the fruit crops of UP, India. The problem is solved by creating a web GIS portal and a mobile application that coordinate computer science, GIS, remote sensing, and mobile technology. After analyzing satellite remote sensing data, information on various orchards is extracted and converted into a GIS feature geodatabase. This is converted and imported into the mobile application; with the help of geographical coordinates, fruit-related information for a field is filled in by the grower/farmer through a handy, user-friendly, vernacular mobile application. The submitted information is pulled into a user-friendly GIS portal, where, with the help of latitude and longitude, all of the farmer's crop information, such as which variety of fruit is available in the field and in what quantity, is recorded. Once filled in, the web GIS portal is updated automatically. A list of all buyers is already available on the web GIS portal, which resolves the problems of both the farmer and the buyer.
13

Wicaksono, Mochamad Fajar, and Myrna Dwi Rahmatya. "IoT BASED HOUSING AREA PORTAL WITH NODEMCU, WEB AND ANDROID APPLICATIONS." IJNMT (International Journal of New Media Technology) 8, no. 1 (June 27, 2021): 10–15. http://dx.doi.org/10.31937/ijnmt.v8i1.1731.

Full text
Abstract:
The access time of using the portal in certain blocks of a residential area can be a problem for some residents. Another problem arises when the officer holding the portal key is not present. The purpose of this study is to create a system that regulates access rights to a particular block within a residential area so that the portal can be opened and closed at any time by residents of the intended area. The system consists of several blocks: the NodeMCU controller block, the ESP32CAM, an Android application, and a web application built with PHP and MySQL. The NodeMCU serves as the main controller: it manages the servo motors, sends and receives data to and from the server, and receives open and close commands for the portal from the Android application. The web application is used to register users, view the portal usage log, and verify the application's login process. Based on the tests carried out, the system runs well: registration, login, opening and closing the portal, and usage logging all work as intended. Index Terms: Portal; NodeMCU; ESP32CAM; Android; Web Application
14

Zagidullin, Bulat, Jehad Aldahdooh, Shuyu Zheng, Wenyu Wang, Yinyin Wang, Joseph Saad, Alina Malyutina, et al. "DrugComb: an integrative cancer drug combination data portal." Nucleic Acids Research 47, no. W1 (May 8, 2019): W43–W51. http://dx.doi.org/10.1093/nar/gkz337.

Full text
Abstract:
Abstract. Drug combination therapy has the potential to enhance efficacy, reduce dose-dependent toxicity and prevent the emergence of drug resistance. However, discovery of synergistic and effective drug combinations has been a laborious and often serendipitous process. In recent years, identification of combination therapies has been accelerated due to the advances in high-throughput drug screening, but informatics approaches for systems-level data management and analysis are needed. To contribute toward this goal, we created an open-access data portal called DrugComb (https://drugcomb.fimm.fi) where the results of drug combination screening studies are accumulated, standardized and harmonized. Through the data portal, we provide a web server to analyze and visualize users' own drug combination screening data. Users can also effectively participate in a crowdsourced data curation effort by depositing their data at DrugComb. To initiate the data repository, we collected 437 932 drug combinations tested on a variety of cancer cell lines. We showed that linear regression approaches, when considering chemical fingerprints as predictors, have the potential to achieve high accuracy in predicting the sensitivity of drug combinations. All the data and informatics tools are freely available in DrugComb to enable a more efficient utilization of data resources for future drug combination discovery.
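The regression idea mentioned at the end of the abstract can be illustrated with a toy example: ordinary least squares on a single fingerprint-derived feature predicting a synergy score. The feature and scores below are invented, not DrugComb data:

```python
# Toy sketch (invented numbers, not DrugComb data): closed-form simple
# linear regression, mirroring the idea of regressing drug-combination
# sensitivity on chemical-fingerprint features.

def fit_line(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical feature: shared fingerprint bits between two drugs
shared_bits = [2, 4, 6, 8]
synergy     = [1.0, 2.1, 2.9, 4.2]   # hypothetical synergy scores

slope, intercept = fit_line(shared_bits, synergy)
predict = lambda x: slope * x + intercept
print(round(predict(5), 2))
```

DrugComb's actual models use high-dimensional fingerprints and regularized multivariate regression; this is only the one-feature skeleton of that idea.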
15

Gemmell, A. L., R. M. Barciela, J. D. Blower, K. Haines, Q. Harpham, K. Millard, M. R. Price, and A. Saulter. "An ECOOP web portal for visualising and comparing distributed coastal oceanography model and in situ data." Ocean Science 7, no. 4 (July 4, 2011): 445–54. http://dx.doi.org/10.5194/os-7-445-2011.

Full text
Abstract:
Abstract. As part of a large European coastal operational oceanography project (ECOOP), we have developed a web portal for the display and comparison of model and in situ marine data. The distributed model and in situ datasets are accessed via an Open Geospatial Consortium Web Map Service (WMS) and Web Feature Service (WFS) respectively. These services were developed independently and readily integrated for the purposes of the ECOOP project, illustrating the ease of interoperability resulting from adherence to international standards. The key feature of the portal is the ability to display co-plotted timeseries of the in situ and model data and the quantification of misfits between the two. By using standards-based web technology we allow the user to quickly and easily explore over twenty model data feeds and compare these with dozens of in situ data feeds without being concerned with the low level details of differing file formats or the physical location of the data. Scientific and operational benefits to this work include model validation, quality control of observations, data assimilation and decision support in near real time. In these areas it is essential to be able to bring different data streams together from often disparate locations.
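Access through an OGC Web Map Service, as used by this portal, ultimately reduces to HTTP GetMap requests. A minimal sketch of composing such a request URL with the standard library follows; the endpoint and layer name are placeholders:

```python
# Illustrative sketch: build an OGC WMS 1.3.0 GetMap request URL, as a
# portal like the one described might do when fetching map imagery.
# Endpoint and layer name are placeholder assumptions.
from urllib.parse import urlencode

def wms_getmap_url(endpoint: str, layer: str, bbox, width=512, height=512):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        # WMS 1.3.0 with EPSG:4326 uses lat,lon axis order:
        # minlat,minlon,maxlat,maxlon
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return f"{endpoint}?{urlencode(params)}"

url = wms_getmap_url("https://example.org/wms", "sea_surface_temp",
                     (49.0, -6.0, 61.0, 3.0))
print(url)
```

The portal's value-add described above, co-plotting and misfit quantification, sits on top of many such standardized requests to distributed WMS and WFS servers.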
16

Gemmell, A. L., R. M. Barciela, J. D. Blower, K. Haines, Q. Harpham, K. Millard, M. R. Price, and A. Saulter. "An ECOOP web portal for visualising and comparing distributed coastal oceanography model and in-situ data." Ocean Science Discussions 8, no. 1 (January 28, 2011): 189–218. http://dx.doi.org/10.5194/osd-8-189-2011.

Full text
Abstract:
Abstract. As part of a large European coastal operational oceanography project (ECOOP), we have developed a web portal for the display and comparison of model and in-situ marine data. The distributed model and in-situ datasets are accessed via an Open Geospatial Consortium Web Map Service (WMS) and Web Feature Service (WFS) respectively. These services were developed independently and readily integrated for the purposes of the ECOOP project, illustrating the ease of interoperability resulting from adherence to international standards. The key feature of the portal is the ability to display co-plotted timeseries of the in-situ and model data and the quantification of misfits between the two. By using standards-based web technology we allow the user to quickly and easily explore over twenty model data feeds and compare these with dozens of in-situ data feeds without being concerned with the low level details of differing file formats or the physical location of the data. Scientific and operational benefits to this work include model validation, quality control of observations, data assimilation and decision support in near real time. In these areas it is essential to be able to bring different data streams together from often disparate locations.
17

Ivanova, Natalya V., and Maxim P. Shashkov. "Biodiversity databases in Russia: towards a national portal." Arctic Science 3, no. 3 (September 1, 2017): 560–76. http://dx.doi.org/10.1139/as-2016-0050.

Full text
Abstract:
Russia holds massive biodiversity data accumulated in botanical and zoological collections, literature publications, annual reports of nature reserves, and nature conservation and monitoring project reports. While some data have been digitized and organized in databases or spreadsheets, most biodiversity data in Russia remain dormant and digitally inaccessible. The concept of open access to research data is spreading, but the lack of a data publishing tradition and of data standards remains prominent. A national biodiversity information system is lacking, and most biodiversity data are either unavailable or not consolidated. As a result, Russian biodiversity data remain fragmented and inaccessible to researchers. The majority of Russian biodiversity databases have no web interface and are accessible only to a limited number of researchers. The main reason for the lack of access to these resources is that the databases were developed as purely local resources. In addition, many sources were built in desktop database environments, mainly MS Access and, in some cases, earlier DBMSs for DOS, i.e., file-server systems without the functionality to expose records through a web interface. Among the databases with a web interface, a few information systems offer interactive maps of species occurrence data or allow registered users to upload data. It is important to note that the conceptual structures of these databases were created without taking the Darwin Core standard into account; furthermore, some data sources predate the first working version of Darwin Core, released in 2001. Despite the complexity and size of the biodiversity data landscape in Russia, interest in publishing data through international biodiversity portals is increasing among Russian researchers. Since 2014, institutional data publishers in Russia have published about 140 000 species occurrences through gbif.org. This increase in data publishing activity calls for the creation of a GBIF node in Russia, aiming to support Russian biodiversity experts in international data work.
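For context, publishing occurrence data through GBIF typically means flat records expressed with Darwin Core terms. A minimal sketch of writing one such record to CSV follows; the record values are invented:

```python
# Minimal sketch: one species-occurrence record using a few Darwin Core
# terms (occurrenceID, scientificName, decimalLatitude, decimalLongitude,
# eventDate), written to CSV, the flat shape commonly published to GBIF.
# All record values are invented for illustration.
import csv, io

FIELDS = ["occurrenceID", "scientificName",
          "decimalLatitude", "decimalLongitude", "eventDate"]

record = {
    "occurrenceID": "urn:example:occ:1",   # placeholder identifier
    "scientificName": "Sorex araneus",
    "decimalLatitude": 54.83,
    "decimalLongitude": 37.58,
    "eventDate": "2016-06-12",
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(record)
print(buf.getvalue())
```

A full Darwin Core Archive adds a metadata descriptor and many more optional terms, but this column-per-term layout is the core of what legacy databases must be mapped into.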
18

Dolder, Danisa, Gustavious P. Williams, A. Woodruff Miller, Everett James Nelson, Norman L. Jones, and Daniel P. Ames. "Introducing an Open-Source Regional Water Quality Data Viewer Tool to Support Research Data Access." Hydrology 8, no. 2 (June 10, 2021): 91. http://dx.doi.org/10.3390/hydrology8020091.

Full text
Abstract:
Water quality data collection, storage, and access is a difficult task, and significant work has gone into methods to store and disseminate these data. We present a tool that disseminates research data in a simple manner that does not replace but extends and leverages these tools. The tool is not geographically limited and works with any spatially referenced data. In most regions, government agencies maintain central repositories for water quality data. In the United States, the federal government maintains two systems that fill this role for hydrological data: the U.S. Geological Survey (USGS) National Water Information System (NWIS) and the U.S. Environmental Protection Agency (EPA) Storage and Retrieval System (STORET), since superseded by the Water Quality Portal (WQP). The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has developed the Hydrologic Information System (HIS) to standardize the search and discovery of these data as well as other observational time series datasets. Additionally, CUAHSI developed and maintains HydroShare.org as a web portal for researchers to store and share hydrology data in a variety of formats, including spatial geographic information system data. We present the Tethys Platform-based Water Quality Data Viewer (WQDV) web application, which uses these systems to provide researchers and local monitoring organizations with a simple method to archive, view, analyze, and distribute water quality data. WQDV provides an archive for non-official or preliminary research data and access to data that have been collected but need to be distributed prior to review or inclusion in the state database. WQDV can also accept subsets of data downloaded from other sources, such as the EPA WQP. WQDV helps users understand what local data are available and how they relate to the data in larger databases. WQDV presents data in spatial (map) and temporal (time series graph) forms to help users analyze and potentially screen the data sources before export for additional analysis. WQDV provides a convenient method for interim data to be widely disseminated and easily accessible in the context of a subset of official data. We present WQDV using a case study of data from Utah Lake, Utah, United States of America.
19

Li, Gang, Changfei Luan, Yanhan Dong, Yifang Xie, Scott C. Zentz, Rob Zelt, Jeff Roach, et al. "ExpressHeart: Web Portal to Visualize Transcriptome Profiles of Non-Cardiomyocyte Cells." International Journal of Molecular Sciences 22, no. 16 (August 19, 2021): 8943. http://dx.doi.org/10.3390/ijms22168943.

Abstract:
Unveiling the molecular features in the heart is essential for the study of heart diseases. Non-cardiomyocytes (nonCMs) play critical roles in providing structural and mechanical support to the working myocardium. There is an increasing amount of single-cell RNA-sequencing (scRNA-seq) data characterizing the transcriptomic profiles of nonCM cells. However, no tool allows researchers to easily access the information. Thus, in this study, we develop an open-access web portal, ExpressHeart, to visualize scRNA-seq data of nonCMs from five laboratories encompassing three species. ExpressHeart enables comprehensive visualization of major cell types and subtypes in each study; visualizes gene expression in each cell type/subtype in various ways; and facilitates identifying cell-type-specific and species-specific marker genes. ExpressHeart also provides an interface to directly combine information across datasets, for example, generating lists of high confidence DEGs by taking the intersection across different datasets. Moreover, ExpressHeart performs comparisons across datasets. We show that some homolog genes (e.g., Mmp14 in mice and mmp14b in zebrafish) are expressed in different cell types between mice and zebrafish, suggesting different functions across species. We expect ExpressHeart to serve as a valuable portal for investigators, shedding light on the roles of genes in heart development in nonCM cells.
20

Bill, R., A. Lorenzen-Zabel, M. Hinz, J. Kalcher, A. Pfeiffer, A. Brosowski, H. Aberle, et al. "OPENGEOEDU – A MASSIVE OPEN ONLINE COURSE ON USING OPEN GEODATA." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-5-2020 (August 3, 2020): 31–38. http://dx.doi.org/10.5194/isprs-annals-v-5-2020-31-2020.

Abstract:
Abstract. This article presents the concept, implementation and results of the project "OpenGeoEdu", an open and web-based educational resource on Remote Sensing and GIS. OpenGeoEdu is focused on the use of open geodata in spatially oriented study courses. Teachers and students in the German-speaking countries are offered an open learning environment, with the aim of increasing the motivation of students and researchers by dealing with current societally relevant issues. OpenGeoEdu is available at www.opengeoedu.de, has been offered as a MOOC since October 2018 and is being continuously expanded and developed. In addition, an umbrella portal of the portals on open geodata is available to quickly get an overview of the data offered. Four partners from universities, non-university research institutions and federal research authorities with R&D tasks are collaborating in this project, offering case studies for teaching and education based on their experiences in a wide range of spatial applications.
21

Muchini, Ronald, Webster Gumindoga, Sydney Togarepi, Tarirai Pinias Masarira, and Timothy Dube. "Near real time water quality monitoring of Chivero and Manyame lakes of Zimbabwe." Proceedings of the International Association of Hydrological Sciences 378 (May 29, 2018): 85–92. http://dx.doi.org/10.5194/piahs-378-85-2018.

Abstract:
Abstract. Zimbabwe's water resources are under pressure from both point and non-point sources of pollution, hence the need for regular and synoptic assessment. In-situ and laboratory-based methods of water quality monitoring are point based and do not provide synoptic coverage of the lakes. This paper presents novel methods for retrieving water quality parameters in Chivero and Manyame lakes, Zimbabwe, from remotely sensed imagery. The remotely sensed water quality parameters are further validated using in-situ data. It also presents an application, developed in VB6, for automated retrieval of those parameters, as well as a web portal for disseminating the water quality information to relevant stakeholders. The web portal is developed using GeoServer, OpenLayers and HTML. Results show the spatial variation of water quality and an automated remote sensing and GIS system with a web front end to disseminate water quality information.
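As an illustration of the kind of retrieval such systems implement, the sketch below applies a generic empirical band-ratio model to reflectance values. The function, coefficients and band values are invented for the example and are not taken from the paper.

```python
# Hypothetical empirical band-ratio retrieval of a water quality proxy
# (e.g. chlorophyll-a) from two reflectance bands. The coefficients a and b
# are placeholders, not those calibrated in the study.

def band_ratio_retrieval(green: float, blue: float,
                         a: float = 1.0, b: float = 1.0) -> float:
    """Estimate a water quality proxy as a power function of a band ratio."""
    if blue <= 0:
        raise ValueError("blue reflectance must be positive")
    return a * (green / blue) ** b

# Apply the model pixel by pixel to a small reflectance grid.
green_band = [[0.08, 0.10], [0.12, 0.09]]
blue_band = [[0.04, 0.05], [0.04, 0.03]]

estimates = [
    [band_ratio_retrieval(g, bl) for g, bl in zip(grow, brow)]
    for grow, brow in zip(green_band, blue_band)
]
```

In practice such a model would be fitted against the in-situ samples the paper uses for validation before being applied scene-wide.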
22

Artese, Maria Teresa, and Isabella Gagliardi. "Integrating, Indexing and Querying the Tangible and Intangible Cultural Heritage Available Online: The QueryLab Portal." Information 13, no. 5 (May 19, 2022): 260. http://dx.doi.org/10.3390/info13050260.

Abstract:
Cultural heritage inventories have been created to collect and preserve culture and to allow the participation of stakeholders and communities, promoting and disseminating their knowledge. There are two types of inventories: those that give data access via web services or open data, and others that are closed to external access and can be visited only through dedicated web sites, generating data-silo problems. The integration of data harvested from different archives makes it possible to compare the cultures and traditions of places from opposite sides of the world, showing how people have more in common than expected. The purpose of the developed portal is to provide query tools that manage the web services provided by cultural heritage databases in a transparent way, allowing the user to make a single query and obtain results from all the inventories considered at the same time. Moreover, with the introduction of the ICH-Light model, specifically designed for the mapping of intangible heritage, data from inventories of this domain can also be harvested, indexed and integrated into the portal, allowing the creation of an environment dedicated to intangible data where traditions, knowledge, rituals and festive events can be found and searched all together.
23

David, Fabrice P. A., Maria Litovchenko, Bart Deplancke, and Vincent Gardeux. "ASAP 2020 update: an open, scalable and interactive web-based portal for (single-cell) omics analyses." Nucleic Acids Research 48, W1 (May 25, 2020): W403—W414. http://dx.doi.org/10.1093/nar/gkaa412.

Abstract:
Abstract Single-cell omics enables researchers to dissect biological systems at a resolution that was unthinkable just 10 years ago. However, this analytical revolution also triggered new demands in ‘big data’ management, forcing researchers to stay up to speed with increasingly complex analytical processes and rapidly evolving methods. To render these processes and approaches more accessible, we developed the web-based, collaborative portal ASAP (Automated Single-cell Analysis Portal). Our primary goal is thereby to democratize single-cell omics data analyses (scRNA-seq and more recently scATAC-seq). By taking advantage of a Docker system to enhance reproducibility, and novel bioinformatics approaches that were recently developed for improving scalability, ASAP meets challenging requirements set by recent cell atlasing efforts such as the Human (HCA) and Fly (FCA) Cell Atlas Projects. Specifically, ASAP can now handle datasets containing millions of cells, integrating intuitive tools that allow researchers to collaborate on the same project synchronously. ASAP tools are versioned, and researchers can create unique access IDs for storing complete analyses that can be reproduced or completed by others. Finally, ASAP does not require any installation and provides a full and modular single-cell RNA-seq analysis pipeline. ASAP is freely available at https://asap.epfl.ch.
24

Rantala, Heikki, Ilkka Jokipii, Esko Ikkala, and Eero Hyvönen. "WarVictimSampo 1914–1922: A National War Memorial on the Semantic Web for Digital Humanities Research and Applications." Journal on Computing and Cultural Heritage 15, no. 1 (February 28, 2022): 1–18. http://dx.doi.org/10.1145/3477606.

Abstract:
This article presents the semantic portal and Linked Open Data service WarVictimSampo 1914–1922 about the war victims, battles, and prisoner camps in the Finnish Civil and other wars in 1914–1922. The system is based on a database of the National Archives of Finland and additional related data created, compiled, and linked during the project. The system contains detailed information about some 40,000 deaths extracted from several data sources and data about over 1,000 battles of the Civil War. A key novelty of WarVictimSampo 1914–1922 is the integration of ready-to-use Digital Humanities visualizations and data analysis tooling with semantic faceted search and data exploration, which allows, e.g., studying data about wider prosopographical groups in addition to individual war victims. The article focuses on demonstrating how the tools of the portal, as well as the underlying SPARQL endpoint openly available on the Web, can be used to explore and analyze war history in flexible and visual ways. WarVictimSampo 1914–1922 is a new member in the series of “Sampo” model-based semantic portals. The portal is in use and has had 23,000 users, including both war historians and the general public seeking information about their deceased relatives.
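The openly available SPARQL endpoint mentioned above can be queried programmatically. The sketch below shows the generic SPARQL 1.1 Protocol pattern (HTTP GET with a URL-encoded `query` parameter) and parsing of the standard JSON results format; the endpoint URL, query and sample response are placeholders, not the portal's actual data or schema.

```python
# Generic SPARQL-over-HTTP sketch. The endpoint, query and response below
# are illustrative; a real client would pick the JSON results format via
# the HTTP Accept header (application/sparql-results+json).
import json
from urllib.parse import urlencode

ENDPOINT = "https://example.org/sparql"  # placeholder endpoint

def build_query_url(endpoint: str, query: str) -> str:
    """SPARQL 1.1 Protocol: query via GET with a URL-encoded `query` param."""
    return endpoint + "?" + urlencode({"query": query})

query = """
SELECT ?person ?label WHERE {
  ?person rdfs:label ?label .
} LIMIT 10
"""
url = build_query_url(ENDPOINT, query)

# Parsing the standard SPARQL JSON results format (embedded sample):
sample_response = json.loads("""
{"head": {"vars": ["person", "label"]},
 "results": {"bindings": [
   {"person": {"type": "uri", "value": "https://example.org/p1"},
    "label": {"type": "literal", "value": "Example Person"}}]}}
""")
rows = [
    {var: binding[var]["value"] for var in binding}
    for binding in sample_response["results"]["bindings"]
]
```

The same pattern applies to any endpoint of the "Sampo" portal family; only the vocabulary in the query changes.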
25

Andrade, Morgana Carneiro, Rafaela Oliveira da Cunha, Jorge Figueiredo, and Ana Alice Baptista. "Do the European Data Portal Datasets in the Categories Government and Public Sector, Transport and Education, Culture and Sport Meet the Data on the Web Best Practices?" Data 6, no. 8 (August 19, 2021): 94. http://dx.doi.org/10.3390/data6080094.

Abstract:
The European Data Portal is one of the worldwide initiatives that aggregate and make open data available. This is a case study with a qualitative approach that aims to determine to what extent the datasets from the Government and Public Sector, Transport, and Education, Culture and Sport categories published on the portal meet the Data on the Web Best Practices (W3C). With the datasets sorted by last modified and filtered by the ratings Excellent and Good+, we analyzed 50 different datasets from each category. The analysis revealed that the Government and Public Sector and Transport categories have the best-rated datasets, and Education, Culture and Sport the least. The most observed BPs were: BP1, BP2, BP4, BP5, BP10, BP11, BP12, BP13C, BP16, BP17, BP19, BP29, and BP34, while the least observed were: BP3, BP7H, BP7C, BP13H, BP14, BP15, BP21, BP32, and BP35. These results fill a gap in the literature on the quality of the data made available by this portal and provide insights for European data managers on which best practices are most observed and which need more attention.
26

Iosifescu Enescu, Ionuț, Lucia de Espona, Dominik Haas-Artho, Rebecca Kurup Buchholz, David Hanimann, Marius Rüetschi, Dirk Nikolaus Karger, et al. "Cloud Optimized Raster Encoding (CORE): A Web-Native Streamable Format for Large Environmental Time Series." Geomatics 1, no. 3 (August 18, 2021): 369–82. http://dx.doi.org/10.3390/geomatics1030021.

Abstract:
The Environmental Data Portal EnviDat aims to fuse data publication repository functionalities with next-generation web-based environmental geospatial information systems (web-EGIS) and Earth Observation (EO) data cube functionalities. User requirements related to mapping and visualization represent a major challenge for current environmental data portals. The new Cloud Optimized Raster Encoding (CORE) format enables an efficient storage and management of gridded data by applying video encoding algorithms. Inspired by the cloud optimized GeoTIFF (COG) format, the design of CORE is based on the same principles that enable efficient workflows on the cloud, addressing web-EGIS visualization challenges for large environmental time series in geosciences. CORE is a web-native streamable format that can compactly contain raster imagery as a data hypercube. It enables simultaneous exchange, preservation, and fast visualization of time series raster data in environmental repositories. The CORE format specifications are open source and can be used by other platforms to manage and visualize large environmental time series.
27

Bittrich, Sebastian, Yana Rose, Joan Segura, Robert Lowe, John D. Westbrook, Jose M. Duarte, and Stephen K. Burley. "RCSB Protein Data Bank: improved annotation, search and visualization of membrane protein structures archived in the PDB." Bioinformatics 38, no. 5 (December 2, 2021): 1452–54. http://dx.doi.org/10.1093/bioinformatics/btab813.

Abstract:
Abstract Motivation Membrane proteins are encoded by approximately one fifth of human genes but account for more than half of all US FDA approved drug targets. Thanks to new technological advances, the number of membrane proteins archived in the PDB is growing rapidly. However, automatic identification of membrane proteins or inference of membrane location is not a trivial task. Results We present recent improvements to the RCSB Protein Data Bank web portal (RCSB PDB, rcsb.org) that provide a wealth of new membrane protein annotations integrated from four external resources: OPM, PDBTM, MemProtMD and mpstruc. We have substantially enhanced the presentation of data on membrane proteins. The number of membrane proteins with annotations available on rcsb.org was increased by ∼80%. Users can search for these annotations, explore corresponding tree hierarchies, display membrane segments at the 1D amino acid sequence level, and visualize the predicted location of the membrane layer in 3D. Availability and implementation Annotations, search, tree data and visualization are available at our rcsb.org web portal. Membrane visualization is supported by the open-source Mol* viewer (molstar.org and github.com/molstar/molstar). Supplementary information Supplementary data are available at Bioinformatics online.
28

Altomare, Angela, Nicola Corriero, Corrado Cuocci, Aurelia Falcicchio, Anna Moliterni, and Rosanna Rizzi. "OChemDb: the free online Open Chemistry Database portal for searching and analysing crystal structure information." Journal of Applied Crystallography 51, no. 4 (July 13, 2018): 1229–36. http://dx.doi.org/10.1107/s1600576718008166.

Abstract:
The Open Chemistry Database (OChemDb) is a new free online portal which uses an appropriately designed database of already solved crystal structures. It makes freely available computational and graphical tools for searching and analysing crystal-chemical information of organic, metal–organic and inorganic structures, and providing statistics on desired bond distances, bond angles, torsion angles and space groups. Atom types have been classified by an identifier code containing information about the chemical topology and local environment. The crystallographic data used by OChemDb are acquired from the CIFs contained in the free small-molecule Crystallography Open Database (COD). OChemDb offers easy-to-use and intuitive options for searching. It is updated by following the continuous growth of information stored in the COD. It can be of great utility for structural chemistry, in particular in the process of determination of a new crystal structure, and for any discipline involving crystalline structure knowledge. The use of OChemDb requires only a web browser and an internet connection. Every device (mobile or desktop) and every operating system is able to use OChemDb by accessing its web page. Examples of application of OChemDb are reported.
29

Ghoussaini, Maya, Edward Mountjoy, Miguel Carmona, Gareth Peat, Ellen M. Schmidt, Andrew Hercules, Luca Fumis, et al. "Open Targets Genetics: systematic identification of trait-associated genes using large-scale genetics and functional genomics." Nucleic Acids Research 49, no. D1 (October 12, 2020): D1311—D1320. http://dx.doi.org/10.1093/nar/gkaa840.

Abstract:
Abstract Open Targets Genetics (https://genetics.opentargets.org) is an open-access integrative resource that aggregates human GWAS and functional genomics data including gene expression, protein abundance, chromatin interaction and conformation data from a wide range of cell types and tissues to make robust connections between GWAS-associated loci, variants and likely causal genes. This enables systematic identification and prioritisation of likely causal variants and genes across all published trait-associated loci. In this paper, we describe the public resources we aggregate, the technology and analyses we use, and the functionality that the portal offers. Open Targets Genetics can be searched by variant, gene or study/phenotype. It offers tools that enable users to prioritise causal variants and genes at disease-associated loci and access systematic cross-disease and disease-molecular trait colocalization analysis across 92 cell types and tissues including the eQTL Catalogue. Data visualizations such as Manhattan-like plots, regional plots, credible sets overlap between studies and PheWAS plots enable users to explore GWAS signals in depth. The integrated data is made available through the web portal, for bulk download and via a GraphQL API, and the software is open source. Applications of this integrated data include identification of novel targets for drug discovery and drug repurposing.
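The GraphQL API mentioned above can be called with a plain HTTP POST carrying a JSON body. The sketch below only builds such a payload; the query fields are illustrative placeholders rather than the portal's documented schema.

```python
# Generic GraphQL request construction. GraphQL requests are an HTTP POST
# with a JSON body holding `query` and optional `variables`. The field
# names below (search, queryString, totalGenes) are hypothetical examples,
# not a statement of the Open Targets Genetics schema.
import json

def build_graphql_payload(query: str, variables: dict) -> str:
    return json.dumps({"query": query, "variables": variables})

query = """
query Search($text: String!) {
  search(queryString: $text) { totalGenes }
}
"""
payload = build_graphql_payload(query, {"text": "asthma"})

# The payload would be POSTed with Content-Type: application/json.
decoded = json.loads(payload)
```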
30

Maso, Joan, Alaitz Zabala, Ivette Serral, and Xavier Pons. "A Portal Offering Standard Visualization and Analysis on top of an Open Data Cube for Sub-National Regions: The Catalan Data Cube Example." Data 4, no. 3 (July 10, 2019): 96. http://dx.doi.org/10.3390/data4030096.

Abstract:
The amount of data that the Sentinel fleet is generating over a territory such as Catalonia makes it virtually impossible to download and organize manually as files. The Open Data Cube (ODC) offers a solution for storing big data products in an efficient way with modest hardware, avoiding cloud expenses; the approach is expected to remain useful into the next decade. Yet ODC requires a level of expertise that most people who could benefit from the information do not have. This paper presents a web map browser that gives access to the data and goes beyond simple visualization by combining the OGC WMS standard with modern web browser capabilities to incorporate time series analytics. The paper shows how we have applied this tool to analyze the spatial distribution of the availability of Sentinel-2 data over Catalonia, revealing differences in the number of useful scenes depending on the geographical area, ranging from one or two images per month to more than one image per week. The paper also demonstrates the usefulness of the same approach in giving access to remote sensing information to a set of protected areas around Europe participating in the H2020 ECOPotential project.
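In the OGC WMS standard that the browser builds on, individual time slices of a data cube are selected through the TIME dimension parameter. A minimal sketch of constructing such a request, with a placeholder endpoint, layer name and extent:

```python
# Build a WMS 1.3.0 GetMap URL with a TIME parameter. Endpoint, layer and
# bounding box are invented for the example; the parameter names follow
# the WMS 1.3.0 specification.
from urllib.parse import urlencode

def build_getmap_url(endpoint: str, layer: str, bbox: str, date: str) -> str:
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": bbox,        # lat/lon axis order for EPSG:4326 in WMS 1.3.0
        "WIDTH": "512",
        "HEIGHT": "512",
        "FORMAT": "image/png",
        "TIME": date,        # ISO 8601 date selects one slice of the series
    }
    return endpoint + "?" + urlencode(params)

url = build_getmap_url("https://example.org/wms", "sentinel2_ndvi",
                       "40.5,0.1,42.9,3.4", "2019-07-01")
```

A time-series client simply issues one such request per date and feeds the returned images (or values) into its analytics layer.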
31

Shemyakin, A. S., N. A. Kashulin, and O. V. Petrova. "Web-GIS technologies usage in eco monitoring tasks on example Murmansk region lakes pollution." Geoinformatika, no. 3 (2020): 2–9. http://dx.doi.org/10.47148/1609-364x-2020-3-2-9.

Abstract:
When carrying out long-term environmental monitoring programs, a whole series of problems arises related to the storage, processing and visualization of the data obtained, and to their presentation in a form suitable for managerial decision-making and/or public perception. The volume of primary information received requires structured storage, a special multivariate analysis to identify long-term trends and assess the danger of possible negative phenomena, and different levels of accessibility and generalization of results for different information consumers. At the same time, the copyrights of the information owners must be respected. As a solution to these problems, it is proposed to use GIS-oriented information systems for storing and processing the accumulated information. The paper considers the main aspects of creating a GIS portal for publishing the results of hydrochemical studies of water objects in the Murmansk region. A review of existing developments and approaches to solving this problem is given. A technique for creating a GIS portal based on open source software is proposed. The issue of implementing copyright protection mechanisms for the owners of hydrochemical information is partially addressed. Currently, a pilot version of the GIS portal has been developed, which hosts the web version of the map «Hydrochemical characteristics of the lakes of the Murmansk region». In the future, it is planned to supplement the resource with the blocks «Hydrology», «Bottom sediments of Murmansk region lakes» and «Hydrobionts of Murmansk region lakes», and with visualization tools for primary data. This work is carried out as part of a project to increase the availability of environmental information for both the public and decision-makers. The project may also be of interest to scientific organizations. Keywords: hydrochemical information, GIS, QGIS, GIS portal.
32

Brumana, Raffaella, Daniela Oreni, Branka Cuca, Anna Rampini, and Monica Pepe. "Open Access to Historical Information for Landscape Analysis in an SDI Framework." International Journal of Agricultural and Environmental Information Systems 4, no. 3 (July 2013): 18–40. http://dx.doi.org/10.4018/ijaeis.2013070102.

Abstract:
The paper illustrates the potentials of geospatial data to access a historical digital atlas for landscape analysis and territorial government. The experience of a historical geo-portal, the “Atl@nte dei Catasti Storici,” in the management of geo-referenced and non-geo-referenced maps—ancient cadastral and topographic maps of the Lombardy Region—can be considered a case study with common aspects to many European regions with an extensive cartographic heritage. The development of downstream Web-based services enables integration with other data sources (current maps, satellite and Unmanned Aerial Vehicle [UAV] airborne photogrammetry, and multi-spectral images and derived products). This provides new scenarios for retrieving geospatial knowledge in support of more sustainable management and governance of the territory.
33

Burrows, Toby. "Linked Open Data and Medieval Studies: Some Lessons from the Mapping Manuscript Migrations Project." International Journal of Humanities and Arts Computing 16, no. 1 (March 2022): 64–77. http://dx.doi.org/10.3366/ijhac.2022.0277.

Abstract:
Building on the work of the Mapping Manuscript Migrations (MMM) Project between 2017 and 2020, this article aims to identify the potential benefits and likely challenges of using Linked Open Data (LOD) more widely across the research field of medieval studies. As well as aggregating and linking disparate datasets relating to the history of more than 220,000 medieval manuscripts, the MMM Project reconciled and matched vocabularies for places, persons, organizations, works and manuscripts. It built and tested various forms of access to the aggregated data, including a web portal and a SPARQL endpoint. It also demonstrated suitable ways of publishing its outputs, not only the aggregated data but also the data model and ontologies used. Drawing on the lessons learned in the MMM Project, the article offers suggestions for building an LOD environment for Western European medieval studies more broadly, covering the aggregation of heterogeneous data, the reconciliation of disparate vocabularies, and ways of enabling more effective discovery, exploration and analysis across the aggregated data.
34

Leng, Chew Bee, Kamsiah Mohd Ali, and Ch’ng Eng Hoo. "Open access repositories on open educational resources." Asian Association of Open Universities Journal 11, no. 1 (August 1, 2016): 35–49. http://dx.doi.org/10.1108/aaouj-06-2016-0005.

Abstract:
Purpose Triggered by the advancement of information and communications technology, open access repositories (a variant of digital libraries) are one of the important changes impacting library services. In the context of openness to a wider community to access free resources, Wawasan Open University Library initiated a research project to build open access repositories on open educational resources. Open educational resources (OER) are an area of a multifaceted open movement in education. The purpose of this paper is to show how two web portal repositories on OER materials were developed adopting a Japanese open source software called WEKO. Design/methodology/approach The design approach is based on a pull-to-push strategy whereby metadata of scholarly open access materials kept within the institution's and network communities' digital databases were harvested, using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), into another open knowledge platform for discovery by other users. Findings Positive results emanating from the university's open access repositories development showed how they strengthen the role of the librarian as manager of institutional assets, successfully making the content freely available from this open knowledge platform for reuse in learning and teaching. Research limitations/implications Developing further programmes to encourage and influence faculty members and prospective stakeholders to use and contribute content to the valuable repositories is indeed a challenging task. Originality/value This paper provides insight for academic libraries on how open access repository development and metadata analysis can open up new professional challenges for information professionals in the fields of data management, data quality and the intricacies of supporting data repositories, and can build new open models of collaboration across institutions and libraries. This paper also describes future collaboration work with institutions in sharing their open access resources.
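The OAI-PMH harvesting step described above follows a well-defined protocol. A minimal sketch, with a placeholder repository URL, of building a ListRecords request and extracting Dublin Core titles from a reply:

```python
# OAI-PMH 2.0 harvesting sketch: the verb, metadataPrefix parameter and
# XML namespaces follow the protocol specification; the repository base
# URL and the embedded sample response are placeholders.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

def build_listrecords_url(base_url: str) -> str:
    return base_url + "?" + urlencode(
        {"verb": "ListRecords", "metadataPrefix": "oai_dc"})

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "oai_dc": "http://www.openarchives.org/OAI/2.0/oai_dc/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def extract_titles(response_xml: str) -> list:
    """Pull every dc:title out of a ListRecords response."""
    root = ET.fromstring(response_xml)
    return [el.text for el in root.findall(".//dc:title", NS)]

sample = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
 <ListRecords><record><metadata>
  <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:title>An Open Educational Resource</dc:title>
  </oai_dc:dc>
 </metadata></record></ListRecords>
</OAI-PMH>"""

titles = extract_titles(sample)
url = build_listrecords_url("https://example.org/oai")
```

A harvester loops over such responses, following each `resumptionToken`, and pushes the extracted metadata into the target platform.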
35

Kayanda, A. M., and D. Machuve. "A Web-based Data Visualization Tool Regarding School Dropouts and User Assessment." Engineering, Technology & Applied Science Research 10, no. 4 (August 16, 2020): 5967–73. http://dx.doi.org/10.48084/etasr.3411.

Повний текст джерела
Анотація:
Data visualization is important for understanding the enormous amount of data generated daily. The education domain generates and owns huge amounts of data. Presentation of these data in a way that gives users quick and meaningful insights is very important. One of the biggest challenges in education is school dropouts, which is observed from basic education levels to colleges and universities. This paper presents a web-based data visualization tool for school dropouts in Tanzania targeting primary and secondary schools, together with the users’ feedback regarding the developed tool. We collected data from the United Republic of Tanzania Government Open Data Portal and the President’s Office - Regional Administration and Local Government (PO-RALG). Python was then used to preprocess the data, and finally, with JavaScript, a web-based tool was developed for data visualization. User acceptance testing was conducted and the majority agreed that data visualization is very helpful for quickly understanding data, reporting, and decision making. It was also noted that the developed tool could be useful not only in the education domain but it could also be adopted by other departments and organizations of the government.
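As an illustration of the preprocessing stage described above (the paper itself uses Python), the sketch below aggregates per-school records into regional dropout rates. The column names and figures are invented, not taken from the Tanzanian datasets.

```python
# Aggregate hypothetical per-school enrolment/dropout records into
# regional dropout rates (percent), the kind of summary a dashboard
# would visualize.
import csv
import io
from collections import defaultdict

raw = """region,enrolled,dropouts
Arusha,1000,50
Arusha,800,24
Dodoma,1200,90
"""

def dropout_rates(csv_text: str) -> dict:
    enrolled = defaultdict(int)
    dropouts = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        enrolled[row["region"]] += int(row["enrolled"])
        dropouts[row["region"]] += int(row["dropouts"])
    return {r: round(100 * dropouts[r] / enrolled[r], 2) for r in enrolled}

rates = dropout_rates(raw)
```

The resulting per-region figures would then be serialized (e.g. as JSON) for the JavaScript charting layer.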
36

Cetl, V., T. Kliment, and M. Kliment. "BORDERLESS GEOSPATIAL WEB (BOLEGWEB)." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 14, 2016): 677–82. http://dx.doi.org/10.5194/isprs-archives-xli-b4-677-2016.

Abstract:
The effective access and use of geospatial information (GI) resources acquires a critical value of importance in the modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within implementations of spatial data infrastructures (SDIs) to facilitate the discovery and use of geospatial data. These data are stored in databases located in a layer called the invisible web and are thus ignored by search engines. An SDI uses a catalogue (discovery) service for the web as a gateway to the GI world through metadata defined by ISO standards, which are structurally diverse from OGC metadata. Therefore, a crosswalk needs to be implemented to bridge the OGC resources discovered on the mainstream web with those documented by metadata in an SDI, to enrich its information extent. A public, global and user-friendly portal of OGC resources available on the web ensures and enhances the use of GI within a multidisciplinary context and bridges the geospatial web from the end-user perspective, thus opening its borders to everybody.

The project “Crosswalking the layers of geospatial information resources to enable a borderless geospatial web”, with the acronym BOLEGWEB, is ongoing as a postdoctoral research project at the Faculty of Geodesy, University of Zagreb in Croatia (http://bolegweb.geof.unizg.hr/). The research leading to the results of the project has received funding from the European Union Seventh Framework Programme (FP7 2007-2013) under Marie Curie FP7-PEOPLE-2011-COFUND. The project started in November 2014 and is planned to be finished by the end of 2016. This paper provides an overview of the project, its research questions and methodology, the results achieved so far and future steps.
37

Menegon, Stefano, Alessandro Sarretta, Daniel Depellegrin, Giulio Farella, Chiara Venier, and Andrea Barbanti. "Tools4MSP: an open source software package to support Maritime Spatial Planning." PeerJ Computer Science 4 (October 1, 2018): e165. http://dx.doi.org/10.7717/peerj-cs.165.

Abstract:
This paper presents the Tools4MSP software package, a Python-based Free and Open Source Software (FOSS) suite for geospatial analysis in support of Maritime Spatial Planning (MSP) and marine environmental management. The suite was initially developed within the ADRIPLAN data portal, which has recently been upgraded into the Tools4MSP Geoplatform (data.tools4msp.eu), an integrated web platform that supports MSP through the application of different tools, e.g., collaborative geospatial modelling of cumulative effects assessment (CEA) and marine use conflict (MUC) analysis. The package can be used as a stand-alone library or as a collaborative webtool, providing user-friendly interfaces appropriate to decision-makers, regional authorities, academics and MSP stakeholders. An effective MSP-oriented integrated system of web-based software, users and services is proposed. It includes four components: the Tools4MSP Geoplatform for interoperable and collaborative sharing of geospatial datasets and for MSP-oriented analysis, the Tools4MSP package as a stand-alone library for advanced geospatial and statistical analysis, desktop applications to simplify data curation, and third-party data repositories for multidisciplinary and multilevel geospatial dataset integration. The paper presents an application example of the Tools4MSP GeoNode plugin and an example of the Tools4MSP stand-alone library for CEA in the Adriatic Sea. Tools4MSP and the developed software have been released as FOSS under the GPL 3 license and are currently under further development.
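A cumulative effects assessment of the kind mentioned above is, at its core, a per-cell sum over use–component pairs of use intensity times environmental sensitivity. The sketch below illustrates that idea with invented uses and sensitivity values; it is not the package's actual API.

```python
# Toy per-cell cumulative effects assessment (CEA) score: sum over
# (use, component) pairs of use intensity times sensitivity. All values
# below are invented for illustration.

def cea_score(use_intensity: dict, sensitivity: dict) -> float:
    """CEA for one grid cell."""
    return sum(
        use_intensity[use] * s
        for (use, component), s in sensitivity.items()
        if use in use_intensity
    )

# Normalised intensities of two maritime uses in one cell.
cell_uses = {"shipping": 0.8, "aquaculture": 0.3}

# Sensitivity of each environmental component to each use.
sensitivities = {
    ("shipping", "seagrass"): 0.5,
    ("shipping", "seabirds"): 0.2,
    ("aquaculture", "seagrass"): 0.7,
}

score = cea_score(cell_uses, sensitivities)
```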
APA, Harvard, Vancouver, ISO, and other citation styles
38

Testi, Debora, Paolo Quadrani, and Marco Viceconti. "PhysiomeSpace: digital library service for biomedical data." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368, no. 1921 (June 28, 2010): 2853–61. http://dx.doi.org/10.1098/rsta.2010.0023.

Full text of the source
Abstract:
Every research laboratory has a wealth of biomedical data locked up, which, if shared with other experts, could dramatically improve biomedical and healthcare research. With the PhysiomeSpace service, it is now possible with a few clicks to share with selected users biomedical data in an easy, controlled and safe way. The digital library service is managed using a client–server approach. The client application is used to import, fuse and enrich the data information according to the PhysiomeSpace resource ontology and upload/download the data to the library. The server services are hosted on the Biomed Town community portal, where through a web interface, the user can complete the metadata curation and share and/or publish the data resources. A search service capitalizes on the domain ontology and on the enrichment of metadata for each resource, providing a powerful discovery environment. Once the users have found the data resources they are interested in, they can add them to their basket, following a metaphor popular in e-commerce web sites. When all the necessary resources have been selected, the user can download the basket contents into the client application. The digital library service is now in beta and open to the biomedical research community.
APA, Harvard, Vancouver, ISO, and other citation styles
39

Cetl, V., T. Kliment, and M. Kliment. "BORDERLESS GEOSPATIAL WEB (BOLEGWEB)." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 14, 2016): 677–82. http://dx.doi.org/10.5194/isprsarchives-xli-b4-677-2016.

Full text of the source
Abstract:
The effective access and use of geospatial information (GI) resources acquires a critical value of importance in modern knowledge based society. Standard web services defined by Open Geospatial Consortium (OGC) are frequently used within the implementations of spatial data infrastructures (SDIs) to facilitate discovery and use of geospatial data. This data is stored in databases located in a layer, called the invisible web, thus are ignored by search engines. SDI uses a catalogue (discovery) service for the web as a gateway to the GI world through the metadata defined by ISO standards, which are structurally diverse to OGC metadata. Therefore, a crosswalk needs to be implemented to bridge the OGC resources discovered on mainstream web with those documented by metadata in an SDI to enrich its information extent. A public global wide and user friendly portal of OGC resources available on the web ensures and enhances the use of GI within a multidisciplinary context and bridges the geospatial web from the end-user perspective, thus opens its borders to everybody.

Project “Crosswalking the layers of geospatial information resources to enable a borderless geospatial web” with the acronym BOLEGWEB is ongoing as a postdoctoral research project at the Faculty of Geodesy, University of Zagreb in Croatia (http://bolegweb.geof.unizg.hr/). The research leading to the results of the project has received funding from the European Union Seventh Framework Programme (FP7 2007-2013) under Marie Curie FP7-PEOPLE-2011-COFUND. The project started in the November 2014 and is planned to be finished by the end of 2016. This paper provides an overview of the project, research questions and methodology, so far achieved results and future steps.
APA, Harvard, Vancouver, ISO, and other citation styles
40

AlRyalat, Saif Aldeen, Osama El Khatib, Ola Al-qawasmi, Hadeel Alkasrawi, Raneem al Zu’bi, Maram Abu-Halaweh, Yara alkanash, and Ibrahim Habash. "The impact of the National Heart, Lung, and Blood Institute data: analyzing published articles that used BioLINCC open access data." F1000Research 9 (January 20, 2020): 30. http://dx.doi.org/10.12688/f1000research.21884.1.

Full text of the source
Abstract:
Background: Data sharing is now a mandatory prerequisite for several major funders and journals, where researchers are obligated to deposit the data resulting from their studies in an openly accessible repository. Biomedical open data are now widely available in almost all disciplines, where researchers can freely access and reuse these data in new studies. We aim to assess the impact of open data in terms of publications generated using open data and citations received by these publications, where we will analyze publications that used the Biologic Specimen and Data Repository Information Coordinating Center (BioLINCC) as an example. Methods: As of July 2019, there was a total of 194 datasets stored in BioLINCC repository and accessible through their portal. We requested the full list of publications that used these datasets from BioLINCC, and we also performed a supplementary PubMed search for other publications. We used Web of Science (WoS) to analyze the characteristics of publications and the citations they received. Results: 1,086 published articles used data from BioLINCC repository, but only 987 (90.88%) articles were WoS indexed. The number of publications has steadily increased since 2002 and peaked in 2018 with a total number of 138 publications on that year. The 987 open data publications received a total of 34,181 citations up to 1st October 2019. The average citation per item for the open data publications was 34.63. The total number of citations received by open data publications per year has increased from only 2 citations in 2002, peaking in 2018 with 2361 citations. Conclusion: The vast majority of studies that used BioLINCC open data were published in WoS indexed journals and are receiving an increasing number of citations.
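The citations-per-item figure reported in the abstract above follows from simple division; the snippet below merely re-derives it from the numbers the abstract itself states.

```python
# Figures taken directly from the abstract above.
total_citations = 34_181        # citations received up to 1st October 2019
indexed_publications = 987      # WoS-indexed articles that used BioLINCC data

# Average citations per item, rounded to two decimals.
average_per_item = total_citations / indexed_publications
print(round(average_per_item, 2))  # 34.63, matching the reported value
```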
APA, Harvard, Vancouver, ISO, and other citation styles
41

Massa, Marco, Davide Scafidi, Claudia Mascandola, and Alessio Lorenzetti. "Introducing ISMDq—A Web Portal for Real-Time Quality Monitoring of Italian Strong-Motion Data." Seismological Research Letters 93, no. 1 (November 10, 2021): 241–56. http://dx.doi.org/10.1785/0220210178.

Full text of the source
Abstract:
Abstract We present the Istituto Nazionale di Geofisica e Vulcanologia Strong-Motion Data-quality (ISMDq)—a new automatic system designed to check both continuous data stream and event strong-motion waveforms before online publication. The main purpose of ISMDq is to ensure accurate ground-motion data and derived products to be rapidly shared with monitoring authorities and the scientific community. ISMDq provides data-quality reports within minutes of the occurrence of Italian earthquakes with magnitude ≥3.0 and includes a detailed daily picture describing the performance of the target strong-motion networks. In this article, we describe and discuss the automatic procedures used by ISMDq to perform its data-quality check. Before an earthquake, ISMDq evaluates the selected waveforms through the estimation of quality indexes employed to reject bad data and/or to group approved data into classes of quality that are useful to quantify the level of reliability. The quality indexes are estimated based on comparisons with the background ambient noise level performed both in the time and frequency domains. As a consequence, new high- and low-noise reference levels are derived for the overall Italian strong-motion network, for each station, and for groups of stations in the same soil categories of the Eurocode 8 (Eurocode 8 [EC8], 2003). In absence of earthquakes, 24 hr streaming of ambient noise recordings are analyzed at each station to set an empirical threshold on selected data metrics and data availability, with the goal to build a station quality archive, which is daily updated in a time span of six months. The ISMDq is accessible online (see Data and Resources) from August 2020, providing rapid open access to ∼10,000 high-quality checked automatically processed strong-motion waveforms and metadata, relative to more than 160 Italian earthquakes with magnitude in the 3.0–5.2 range. 
Comparisons between selected strong-motion data automatically processed and then manually revised corroborate the reliability of the proposed procedures.
APA, Harvard, Vancouver, ISO, and other citation styles
42

AlRyalat, Saif Aldeen, Osama El Khatib, Ola Al-qawasmi, Hadeel Alkasrawi, Raneem al Zu’bi, Maram Abu-Halaweh, Yara alkanash, and Ibrahim Habash. "The National Heart, Lung, and Blood Institute data: analyzing published articles that used BioLINCC open access data." F1000Research 9 (August 18, 2021): 30. http://dx.doi.org/10.12688/f1000research.21884.4.

Full text of the source
Abstract:
Background: Data sharing is now a mandatory prerequisite for several major funders and journals, where researchers are obligated to deposit the data resulting from their studies in an openly accessible repository. Biomedical open data are now widely available in almost all disciplines, where researchers can freely access and reuse these data in new studies. We aim to study the BioLINCC datasets, number of publications that used BioLINCC open access data, and the citations received by these publications. Methods: As of July 2019, there was a total of 194 datasets stored in BioLINCC repository and accessible through their portal. We requested the full list of publications that used these datasets from BioLINCC, and we also performed a supplementary PubMed search for other publications. We used Web of Science (WoS) to analyze the characteristics of publications and the citations they received, where WoS database index high quality articles. Results: 1,086 published articles used data from BioLINCC repository for 79 (40.72%) datasets, where 115 (59.28%) datasets did not have any publications associated with it. Of the total publications, 987 (90.88%) articles were WoS indexed. The number of publications has steadily increased since 2002 and peaked in 2018 with a total number of 138 publications on that year. The 987 open data publications (i.e., secondary publications) received a total of 34,181 citations up to 1st October 2019. The average citation per item for the open data publications was 34.63. The total number of citations received by open data publications per year has increased from only 2 citations in 2002, peaking in 2018 with 2361 citations. Conclusion: Majority of BioLINCC datasets were not used in secondary publications. Despite that, the datasets used for secondary publications yielded publications in WoS indexed journals and are receiving an increasing number of citations.
APA, Harvard, Vancouver, ISO, and other citation styles
43

Kong, Ningning Nicole. "One store has all? - the backend story of managing geospatial information toward an easy discovery." IASSIST Quarterly 42, no. 4 (February 22, 2019): 1–14. http://dx.doi.org/10.29173/iq927.

Full text of the source
Abstract:
Geospatial data includes many formats, varying from historical paper maps, to digital information collected by various sensors. Many libraries have started the efforts to build a geospatial data portal to connect users with the various information. For example, GeoBlacklight and OpenGeoportal are two open-source projects that initiated from academic institutions which have been adopted by many universities and libraries for geospatial data discovery. While several recent studies have focused on the metadata, usability and data collection perspectives of geospatial data portals, not many have explored the backend stories about data management to support the data discovery platform. The objective of this paper is to provide a summary about geospatial data management strategies involved in the geospatial data portal development by reviewing case studies. These data management strategies include managing the historical paper maps, scanned maps, aerial photos, research generated geospatial information, and web map services. This paper focuses on the data organization, storage, cyberinfrustracture configuration, preservation and sharing perspectives of these efforts with the goal to provide a range of options or best management practices for information managers when curating geospatial data in their own institutions.
APA, Harvard, Vancouver, ISO, and other citation styles
44

AlRyalat, Saif Aldeen, Osama El Khatib, Ola Al-qawasmi, Hadeel Alkasrawi, Raneem al Zu’bi, Maram Abu-Halaweh, Yara alkanash, and Ibrahim Habash. "The impact of the National Heart, Lung, and Blood Institute data: analyzing published articles that used BioLINCC open access data." F1000Research 9 (September 28, 2020): 30. http://dx.doi.org/10.12688/f1000research.21884.2.

Full text of the source
Abstract:
Background: Data sharing is now a mandatory prerequisite for several major funders and journals, where researchers are obligated to deposit the data resulting from their studies in an openly accessible repository. Biomedical open data are now widely available in almost all disciplines, where researchers can freely access and reuse these data in new studies. We aim to study the BioLINCC datasets, number of publications that used BioLINCC open access data, and the impact of these publications through the citations they received. Methods: As of July 2019, there was a total of 194 datasets stored in BioLINCC repository and accessible through their portal. We requested the full list of publications that used these datasets from BioLINCC, and we also performed a supplementary PubMed search for other publications. We used Web of Science (WoS) to analyze the characteristics of publications and the citations they received. Results: 1,086 published articles used data from BioLINCC repository for 79 (40.72%) datasets, where 115 (59.28%) datasets didn’t have any publications associated with it. Of the total publications, 987 (90.88%) articles were WoS indexed. The number of publications has steadily increased since 2002 and peaked in 2018 with a total number of 138 publications on that year. The 987 open data publications received a total of 34,181 citations up to 1st October 2019. The average citation per item for the open data publications was 34.63. The total number of citations received by open data publications per year has increased from only 2 citations in 2002, peaking in 2018 with 2361 citations. Conclusion: Majority of BioLINCC datasets were not used in secondary publications. Despite that, the datasets used for secondary publications yielded publications in WoS indexed journals and are receiving an increasing number of citations.
APA, Harvard, Vancouver, ISO, and other citation styles
45

AlRyalat, Saif Aldeen, Osama El Khatib, Ola Al-qawasmi, Hadeel Alkasrawi, Raneem al Zu’bi, Maram Abu-Halaweh, Yara alkanash, and Ibrahim Habash. "The impact of the National Heart, Lung, and Blood Institute data: analyzing published articles that used BioLINCC open access data." F1000Research 9 (April 21, 2021): 30. http://dx.doi.org/10.12688/f1000research.21884.3.

Full text of the source
Abstract:
Background: Data sharing is now a mandatory prerequisite for several major funders and journals, where researchers are obligated to deposit the data resulting from their studies in an openly accessible repository. Biomedical open data are now widely available in almost all disciplines, where researchers can freely access and reuse these data in new studies. We aim to study the BioLINCC datasets, number of publications that used BioLINCC open access data, and the impact of these publications through the citations they received. Methods: As of July 2019, there was a total of 194 datasets stored in BioLINCC repository and accessible through their portal. We requested the full list of publications that used these datasets from BioLINCC, and we also performed a supplementary PubMed search for other publications. We used Web of Science (WoS) to analyze the characteristics of publications and the citations they received, where WoS database index high quality articles. Results: 1,086 published articles used data from BioLINCC repository for 79 (40.72%) datasets, where 115 (59.28%) datasets didn’t have any publications associated with it. Of the total publications, 987 (90.88%) articles were WoS indexed. The number of publications has steadily increased since 2002 and peaked in 2018 with a total number of 138 publications on that year. The 987 open data publications received a total of 34,181 citations up to 1st October 2019. The average citation per item for the open data publications was 34.63. The total number of citations received by open data publications per year has increased from only 2 citations in 2002, peaking in 2018 with 2361 citations. Conclusion: Majority of BioLINCC datasets were not used in secondary publications. Despite that, the datasets used for secondary publications yielded publications in WoS indexed journals and are receiving an increasing number of citations.
APA, Harvard, Vancouver, ISO, and other citation styles
46

Spondylidis, Spyros, Konstantinos Topouzelis, Dimitris Kavroudakis, and Michail Vaitis. "Mesoscale Ocean Feature Identification in the North Aegean Sea with the Use of Sentinel-3 Data." Journal of Marine Science and Engineering 8, no. 10 (September 25, 2020): 740. http://dx.doi.org/10.3390/jmse8100740.

Full text of the source
Abstract:
The identification of oceanographic circulation related features is a valuable tool for environmental and fishery management authorities, commercial use and institutional research. Remote sensing techniques are suitable for detection, as in situ measurements are prohibitively costly, spatially sparse and infrequent. Still, these imagery applications require a certain level of technical and theoretical skill making them practically unreachable to the immediate beneficiaries. In this paper a new geospatial web service is proposed for providing daily data on mesoscale oceanic feature identification in the North Aegean Sea, produced by Sentinel-3 SLSTR Sea Surface Temperature (SST) imagery, to end users. The service encompasses an automated process for: raw data acquisition, interpolation, oceanic feature extraction and publishing through a webGIS application. Level-2 SST data are interpolated through a Co-Kriging algorithm, involving information from short term historical data, in order to retain as much information as possible. A modified gradient edge detection methodology is then applied to the interpolated products for the mesoscale feature extraction. The resulting datasets are served according to the Open Geospatial Consortium (OGC) standards and are available for visualization, processing and download though a dedicated web portal.
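The gradient-based edge detection the abstract above mentions can be illustrated with a small self-contained sketch: compute the spatial gradient of an SST grid and threshold its magnitude to flag candidate fronts. The grid values and the threshold here are invented for illustration and are not taken from the paper's method.

```python
import numpy as np

# Toy sea surface temperature grid (degrees C); a sharp jump between the
# second and third columns stands in for a thermal front.
sst = np.array([
    [14.0, 14.1, 16.0, 16.1],
    [14.0, 14.2, 16.1, 16.2],
    [14.1, 14.1, 16.0, 16.3],
])

gy, gx = np.gradient(sst)        # per-cell temperature gradients (rows, cols)
magnitude = np.hypot(gx, gy)     # gradient magnitude per cell
fronts = magnitude > 0.5         # assumed threshold marks candidate fronts
print(fronts.astype(int))
```

Operational pipelines add interpolation of cloud gaps and more elaborate edge operators, but thresholding the gradient magnitude is the core of this family of front-detection methods.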
APA, Harvard, Vancouver, ISO, and other citation styles
47

Fairley, Susan, Ernesto Lowy-Gallego, Emily Perry, and Paul Flicek. "The International Genome Sample Resource (IGSR) collection of open human genomic variation resources." Nucleic Acids Research 48, no. D1 (October 4, 2019): D941—D947. http://dx.doi.org/10.1093/nar/gkz836.

Full text of the source
Abstract:
Abstract To sustain and develop the largest fully open human genomic resources the International Genome Sample Resource (IGSR) (https://www.internationalgenome.org) was established. It is built on the foundation of the 1000 Genomes Project, which created the largest openly accessible catalogue of human genomic variation developed from samples spanning five continents. IGSR (i) maintains access to 1000 Genomes Project resources, (ii) updates 1000 Genomes Project resources to the GRCh38 human reference assembly, (iii) adds new data generated on 1000 Genomes Project cell lines, (iv) shares data from samples with a similarly open consent to increase the number of samples and populations represented in the resources and (v) provides support to users of these resources. Among recent updates are the release of variation calls from 1000 Genomes Project data calculated directly on GRCh38 and the addition of high coverage sequence data for the 2504 samples in the 1000 Genomes Project phase three panel. The data portal, which facilitates web-based exploration of the IGSR resources, has been updated to include samples which were not part of the 1000 Genomes Project and now presents a unified view of data and samples across almost 5000 samples from multiple studies. All data is fully open and publicly accessible.
APA, Harvard, Vancouver, ISO, and other citation styles
48

Thiemann, F., M. Schulze, and U. Böhner. "STATE-WIDE CALCULATION OF TERRAIN-VISUALISATIONS AND AUTOMATIC MAP GENERATION FOR ARCHAEOLOGICAL OBJECTS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 907–13. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-907-2021.

Full text of the source
Abstract:
Abstract. Airborne laser scanning (ALS) became very popular in the last two decades for archaeological prospection. With the state-wide availability of ALS-data in Lower Saxony, Germany, about 48,000 km², we needed flexible and scalable approaches to process the data. First, we produced a state-wide digital terrain model (DTM) and some visualisations of it to use it in standard GIS software. Some of these visualisations are available as web maps and used for prospection also by volunteers. In a second approach, we automatically generate maps for all known archaeological objects. This is mainly used for the documentation of the 130,000 known objects in Lower Saxony, but also for object-by-object revision of the database. These maps will also be presented in the web portal “Denkmalatlas Niedersachsen”, an open data initiative of the state Lower Saxony. In the first part of this paper, we show how the state-wide DTM and its visualisations can be calculated using tiles. In the second part, we describe the automatic map generation process. All implementations were done with ArcGIS and its scripting interface ArcPy.
APA, Harvard, Vancouver, ISO, and other citation styles
49

Sha, Dexuan, Xin Miao, Mengchao Xu, Chaowei Yang, Hongjie Xie, Alberto M. Mestas-Nuñez, Yun Li, Qian Liu, and Jingchao Yang. "An On-Demand Service for Managing and Analyzing Arctic Sea Ice High Spatial Resolution Imagery." Data 5, no. 2 (April 17, 2020): 39. http://dx.doi.org/10.3390/data5020039.

Full text of the source
Abstract:
Sea ice acts as both an indicator and an amplifier of climate change. High spatial resolution (HSR) imagery is an important data source in Arctic sea ice research for extracting sea ice physical parameters, and calibrating/validating climate models. HSR images are difficult to process and manage due to their large data volume, heterogeneous data sources, and complex spatiotemporal distributions. In this paper, an Arctic Cyberinfrastructure (ArcCI) module is developed that allows a reliable and efficient on-demand image batch processing on the web. For this module, available associated datasets are collected and presented through an open data portal. The ArcCI module offers an architecture based on cloud computing and big data components for HSR sea ice images, including functionalities of (1) data acquisition through File Transfer Protocol (FTP) transfer, front-end uploading, and physical transfer; (2) data storage based on Hadoop distributed file system and matured operational relational database; (3) distributed image processing including object-based image classification and parameter extraction of sea ice features; (4) 3D visualization of dynamic spatiotemporal distribution of extracted parameters with flexible statistical charts. Arctic researchers can search and find arctic sea ice HSR image and relevant metadata in the open data portal, obtain extracted ice parameters, and conduct visual analytics interactively. Users with large number of images can leverage the service to process their image in high performance manner on cloud, and manage, analyze results in one place. The ArcCI module will assist domain scientists on investigating polar sea ice, and can be easily transferred to other HSR image processing research projects.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Kalayci, Selim, Myvizhi Esai Selvan, Irene Ramos, Chris Cotsapas, Eva Harris, Eun-Young Kim, Ruth R. Montgomery, et al. "ImmuneRegulation: a web-based tool for identifying human immune regulatory elements." Nucleic Acids Research 47, W1 (May 22, 2019): W142—W150. http://dx.doi.org/10.1093/nar/gkz450.

Full text of the source
Abstract:
Abstract Humans vary considerably both in their baseline and activated immune phenotypes. We developed a user-friendly open-access web portal, ImmuneRegulation, that enables users to interactively explore immune regulatory elements that drive cell-type or cohort-specific gene expression levels. ImmuneRegulation currently provides the largest centrally integrated resource on human transcriptome regulation across whole blood and blood cell types, including (i) ∼43,000 genotyped individuals with associated gene expression data from ∼51,000 experiments, yielding genetic variant-gene expression associations on ∼220 million eQTLs; (ii) 14 million transcription factor (TF)-binding region hits extracted from 1945 ChIP-seq studies; and (iii) the latest GWAS catalog with 67,230 published variant-trait associations. Users can interactively explore associations between queried gene(s) and their regulators (cis-eQTLs, trans-eQTLs or TFs) across multiple cohorts and studies. These regulators may explain genotype-dependent gene expression variations and be critical in selecting the ideal cohorts or cell types for follow-up studies or in developing predictive models. Overall, ImmuneRegulation significantly lowers the barriers between complex immune regulation data and researchers who want rapid, intuitive and high-quality access to the effects of regulatory elements on gene expression in multiple studies to empower investigators in translating these rich data into biological insights and clinical applications, and is freely available at https://immuneregulation.mssm.edu.
APA, Harvard, Vancouver, ISO, and other citation styles