To view the other types of publications on this topic, follow this link: GitHub Pages.

Journal articles on the topic "GitHub Pages"

Create a citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 15 journal articles for your research on the topic "GitHub Pages".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and a bibliographic reference for the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a PDF and read its online abstract whenever the relevant parameters are present in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1. Utomo, Prayudi, and Falahah. "Building Serverless Website on GitHub Pages". IOP Conference Series: Materials Science and Engineering 879 (August 7, 2020): 012077. http://dx.doi.org/10.1088/1757-899x/879/1/012077.
2. Thornberry, Evan, and Phil White. "GitHub and Jekyll for Publishing GIS Workshop Content". Bulletin - Association of Canadian Map Libraries and Archives (ACMLA), no. 166 (December 4, 2020): 25–30. http://dx.doi.org/10.15353/acmla.n166.3463.

Abstract:
In this article, we describe GitHub in simple terms and demonstrate its practical value as a platform for delivering workshop instruction. This article stems from a virtual pre-conference workshop we delivered at the 2020 annual meeting of the Western Association of Map Libraries (WAML). We describe an easily replicated workflow for publishing workshop materials and documentation to the web using GitHub Pages and provide a GitHub repository that readers of this article can copy and customize to suit their own workshop needs.
3. Loginova, Julia, and Pia Wohland. "How to create an interactive dashboard using R: the example of the Queensland COVID-19 tracker". Australian Population Studies 4, no. 2 (November 14, 2020): 39–47. http://dx.doi.org/10.37970/aps.v4i2.72.

Abstract:
Background: Interactive tools like data dashboards enable users both to view and interact with data. In today's data-driven environment it is a priority for researchers and practitioners alike to be able to develop interactive data visualisation tools easily and, where possible, at a low cost. Aims: Here, we provide a guide on how to develop and create an interactive online data dashboard in R, using the COVID-19 tracker for Health and Hospital Regions in Queensland, Australia as an example. We detail a series of steps and explain choices made to design, develop, and easily maintain the dashboard and publish it online. Data and methods: The dashboard visualises publicly available data from the Queensland Health web page. We used the programming language R and its free software environment. The dashboard webpage is hosted publicly on GitHub Pages and updated via GitHub Desktop. Results: Our interactive dashboard is available at https://qcpr.github.io/. Conclusions: Interactive dashboards have many applications, such as dissemination of research and other data. This guide and the supplementary material can be adjusted to develop a new dashboard for a different set of data and needs.
4. Dalgleish, James LT, Yonghong Wang, Jack Zhu, and Paul S. Meltzer. "CNVScope: Visually Exploring Copy Number Aberrations in Cancer Genomes". Cancer Informatics 18 (January 2019): 117693511989029. http://dx.doi.org/10.1177/1176935119890290.

Abstract:
Motivation: DNA copy number (CN) data are a fast-growing source of information used in basic and translational cancer research. Most CN segmentation data are presented without regard to the relationship between chromosomal regions. We offer both a toolkit to help scientists without programming experience visually explore the CN interactome and a package that constructs CN interactomes from publicly available data sets. Results: The CNVScope visualization, based on a publicly available neuroblastoma CN data set, clearly displays a distinct CN interaction in the region of MYCN, a canonical frequent amplicon target in this cancer. Exploration of the data rapidly identified cis and trans events, including a strong anticorrelation between 11q loss and 17q gain, with the region of 11q loss bounded by the cell cycle regulator CCND1. Availability: The Shiny application is readily available for use at http://cnvscope.nci.nih.gov/, and the package can be downloaded from CRAN (https://cran.r-project.org/package=CNVScope), where help pages and vignettes are located. A newer version is available on the GitHub site (https://github.com/jamesdalg/CNVScope/), which features an animated tutorial. The CNVScope package can be locally installed using instructions on the GitHub site for Windows and Macintosh systems. This CN analysis package also runs on a Linux high-performance computing cluster, with options for multinode and multiprocessor analysis of CN variant data. The Shiny application can be started using a single command (which will automatically install the public data package).
5. Benahmed Daho, A. "CRYPTO-SPATIAL: AN OPEN STANDARDS SMART CONTRACTS LIBRARY FOR BUILDING GEOSPATIALLY ENABLED DECENTRALIZED APPLICATIONS ON THE ETHEREUM BLOCKCHAIN". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2020 (August 25, 2020): 421–26. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2020-421-2020.

Abstract:
Blockchain is an emerging, immature technology that is disrupting many well-established industries, such as finance, supply chains, transportation, energy, and official registries (identity, vehicles, …). In this contribution we present a smart contracts library, named Crypto-Spatial, written for the Ethereum Blockchain and designed to serve as a framework for the development of geospatially enabled decentralized applications (dApps). The main goal of this work is to investigate the suitability of Blockchain technology for the storage, retrieval and processing of vector geospatial data. The design and the proof-of-concept implementation presented are both based on the Open Geospatial Consortium standards: Simple Feature Access, Discrete Global Grid Systems (DGGS) and Well Known Binary (WKB). Also, the FOAM protocol concept of Crypto-Spatial Coordinate (CSC) was used to uniquely identify spatial features on the Blockchain immutable ledger. The design of the Crypto-Spatial framework was implemented as a set of smart contracts using the Solidity object-oriented programming language. The implemented library was assessed against Ethereum's best-practice design patterns and known security issues (common attacks). Also, a generic architecture for geospatially enabled decentralized applications, combining Blockchain and IPFS technologies, was proposed. Finally, a proof-of-concept was developed using the proposed approach, whose main purpose is to port UN/FAO-SOLA to the Blockchain techspace, allowing more transparency and simplifying access for user communities. The smart contracts of this prototype are live on the Rinkeby testnet and the frontend is hosted on GitHub Pages. The source code of the work presented here is available on GitHub under the Apache 2.0 license.
6. Mungall, Christopher J., and Ian H. Holmes. "WTFgenes: What's The Function of these genes? Static sites for model-based gene set analysis". F1000Research 6 (April 4, 2017): 423. http://dx.doi.org/10.12688/f1000research.11175.1.

Abstract:
A common technique for interpreting experimentally-identified lists of genes is to look for enrichment of genes associated with particular ontology terms. The most common test uses the hypergeometric distribution; more recently, a model-based test was proposed. These approaches must typically be run using downloaded software, or on a server. We develop a collapsed likelihood for model-based gene set analysis and present WTFgenes, an implementation of both hypergeometric and model-based approaches, that can be published as a static site with computation run in JavaScript on the user's web browser client. Apart from hosting files, zero server resources are required: the site can (for example) be served directly from Amazon S3 or GitHub Pages. A C++11 implementation yielding identical results runs roughly twice as fast as the JavaScript version. WTFgenes is available from https://github.com/evoldoers/wtfgenes under the BSD3 license. A demonstration for the Gene Ontology is usable at https://evoldoers.github.io/wtfgo.
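The hypergeometric enrichment test mentioned in the abstract reduces to a tail probability: given a universe of N genes of which K are annotated with a term, and a study list of n genes of which k carry that term, the p-value is P(X ≥ k). A minimal stdlib-only Python sketch (the function name and the toy numbers are illustrative, not taken from WTFgenes):

```python
from math import comb

def enrichment_pvalue(universe, term, study, overlap):
    """P(X >= overlap) under the hypergeometric null: drawing `study`
    genes at random from a `universe` containing `term` annotated genes."""
    total = comb(universe, study)
    return sum(
        comb(term, k) * comb(universe - term, study - k)
        for k in range(overlap, min(term, study) + 1)
    ) / total

# 20 of 100 genes carry the term; a 10-gene study list contains 6 of them
p = enrichment_pvalue(universe=100, term=20, study=10, overlap=6)  # ~0.004
```

The model-based test the paper develops replaces this per-term tail probability with a joint likelihood over term assignments; the hypergeometric version above is only the baseline both implementations share.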
7. Braun, Bremen L., David A. Schott, John L. Portwood, Carson M. Andorf, and Taner Z. Sen. "PedigreeNet: a web-based pedigree viewer for biological databases". Bioinformatics 35, no. 20 (March 23, 2019): 4184–86. http://dx.doi.org/10.1093/bioinformatics/btz208.

Abstract:
Motivation: Plant breeding aims to improve current germplasm so that it can tolerate a wide range of biotic and abiotic stresses. To accomplish this goal, breeders rely on developing a deeper understanding of the genetic makeup of, and relationships between, plant varieties to make informed plant selections. Although rapid advances in genotyping technology have generated a large amount of data for breeders, tools that facilitate pedigree analysis and visualization are scant, leaving breeders to use classical, but inherently limited, hierarchical pedigree diagrams for a handful of plant varieties. To answer this need, we developed a simple web-based tool, called PedigreeNet, that can be easily implemented at biological databases to create and visualize customizable pedigree relationships in a network context, displaying pre- and user-uploaded data. Results: As a proof of concept, we implemented PedigreeNet at the maize model organism database, MaizeGDB. The PedigreeNet viewer at MaizeGDB has a dynamically generated pedigree network of 4706 maize lines and 5487 relationships, currently available both as a stand-alone web-based tool and integrated directly on the MaizeGDB Stock Pages. The tool allows the user to apply a number of filters, select or upload their own breeding relationships, center a pedigree network on a plant variety, identify the common ancestor between two varieties, and display the shortest path(s) between two varieties on the pedigree network. The PedigreeNet code layer is written as a JavaScript wrapper around Cytoscape Web. PedigreeNet fills a great need for breeders to have access to an online tool to represent and visually customize pedigree relationships. Availability and implementation: PedigreeNet is accessible at https://www.maizegdb.org/breeders_toolbox. The open source code is publicly and freely available at GitHub: https://github.com/Maize-Genetics-and-Genomics-Database/PedigreeNet.
Supplementary information: Supplementary data are available at Bioinformatics online.
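The shortest-path feature described above is, at its core, breadth-first search over the pedigree viewed as an undirected graph. A small sketch (the line names and parent/offspring pairs below are invented for illustration and are not MaizeGDB data):

```python
from collections import deque

def shortest_path(edges, start, goal):
    """Breadth-first search for one shortest path between two
    varieties in an undirected pedigree network."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in sorted(graph.get(path[-1], ())):  # sorted for determinism
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # the two varieties are not connected

# hypothetical parent/offspring pairs
pedigree = [("B73", "B73xMo17"), ("Mo17", "B73xMo17"), ("B73", "A632")]
path = shortest_path(pedigree, "Mo17", "A632")
# path == ['Mo17', 'B73xMo17', 'B73', 'A632']
```

Finding the common ancestor of two varieties is a similar traversal: intersect the ancestor sets reachable from each variety and pick the closest member.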
8. Visconti, Amanda. "Creación de sitios estáticos con Jekyll y GitHub Pages". Programming Historian en español, no. 5 (March 1, 2021). http://dx.doi.org/10.46430/phes0050.

Abstract:
This lesson will help you create a secure, completely free website that is easy to maintain and over which you have full control, such as a scholarly blog, a project website, or an online portfolio.
9. Visconti, Amanda. "Building a static website with Jekyll and GitHub Pages". Programming Historian, no. 5 (April 18, 2016). http://dx.doi.org/10.46430/phen0048.

Abstract:
This lesson will help you create an entirely free, easy-to-maintain, preservation-friendly, secure website over which you have full control, such as a scholarly blog, project website, or online portfolio.
10. Patel, Vraj Vishnubhai, Priyanka Sharma, and Jatin Patel. "Subdomain Takeover: A Challenge as Web App Vulnerability or Server-Side Vulnerability". International Journal of Scientific Research in Science, Engineering and Technology, May 8, 2021, 58–64. http://dx.doi.org/10.32628/ijsrset21837.

Abstract:
A subdomain is a domain that is part of another domain. Subdomains are used to organize and navigate to various parts of a website: for example, the primary domain could be "xyz.com", while the blog could live on a subdomain at "blog.xyz.com". A subdomain takeover occurs when an attacker gains control over a subdomain of a target domain. The vulnerability arises when a subdomain (subdomain.example.com) that points to an external service (e.g., GitHub or AWS S3) is deleted or released; the attacker can then create pages on the service in question and have them served under that subdomain. Checking candidate subdomains manually, one by one, takes too much time. Some tools can check whether a subdomain takeover is possible, but they take as input a text file that already lists the subdomains, which means first enumerating subdomains with other tools and then using one of these tools to identify the takeover vulnerability. Our tool first enumerates the subdomains of a given domain, then checks whether a CNAME record exists for each subdomain; if a CNAME is found and its target returns a 404 status code, the subdomain is flagged as a possible takeover candidate.
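The decision rule sketched in the abstract — flag a subdomain whose CNAME points at an external hosting service when that target answers HTTP 404 — can be written as a small pure function. The service suffixes below are illustrative stand-ins, not the tool's actual fingerprint list, and the DNS/HTTP lookups themselves are left to the caller:

```python
# Hypothetical fingerprints of claimable hosting services; real tools
# ship much longer, curated lists.
CLAIMABLE_SERVICES = ("github.io", "s3.amazonaws.com", "herokuapp.com")

def takeover_candidate(cname, status_code):
    """Heuristic from the abstract: a subdomain whose CNAME points at
    a claimable hosting service and which answers HTTP 404 may be
    vulnerable to takeover."""
    if cname is None:  # no CNAME record -> not this class of takeover
        return False
    target = cname.rstrip(".").lower()
    points_at_service = any(target.endswith(s) for s in CLAIMABLE_SERVICES)
    return points_at_service and status_code == 404
```

Note this is only a heuristic: a 404 from a claimable service suggests the resource is unclaimed, but confirming the takeover still requires attempting to register the dangling resource.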
11. Toman, Zinah Hussein, Sarah Hussein Toman, and Manar Joundy Hazar. "An In-Depth Comparison Of Software Frameworks For Developing Desktop Applications Using Web Technologies". Journal of Southwest Jiaotong University 54, no. 4 (2019). http://dx.doi.org/10.35741/issn.0258-2724.54.4.1.

Abstract:
Today JavaScript is one of the most popular and fastest-growing programming languages. Initially designed as a web browser scripting language, its adoption has reached beyond web pages: the Internet of Things, mobile, and desktop applications. Lately, increased interest can be observed in developing desktop software using JavaScript and other web technologies such as HTML and CSS. Many popular software products have followed this path: Skype, Visual Studio Code, Atom, Brackets, Light Table, Microsoft Teams, Microsoft SQL Operations Studio, GitHub Desktop, Signal, etc. The aim of this article is to aid developers in choosing the right framework for their needs, through a comprehensive side-by-side comparison of Electron and NW.js, the two frameworks available for developing desktop software with JavaScript, HTML and CSS. The article concludes that Electron, despite being the younger project, outperforms NW.js in terms of capabilities in most areas, such as file system, user interface, system integration, and multimedia, with the exception of printing. However, NW.js is easier to use and debug.
12. Schneider, Florian, David Fichtmüller, Martin Gossner, Anton Güntsch, Malte Jochum, Birgitta Koenig-Ries, Gaëtane Le Provost, et al. "Towards an Ecological Trait-data Standard Vocabulary". Biodiversity Information Science and Standards 3 (July 2, 2019). http://dx.doi.org/10.3897/biss.3.37612.

Abstract:
Trait-based research spans from evolutionary studies of individual-level properties to global patterns of biodiversity and ecosystem functioning. An increasing amount of trait data is available for many different organism groups, published as open-access data on a variety of file hosting services. However, standardization between datasets is generally lacking due to heterogeneous data formats and types, and the compilation of these published data into centralised databases remains a difficult and time-consuming task. We reviewed existing trait databases and online services, as well as initiatives for trait data standardization. Together with data providers and users participating in a large long-term observation project on multiple taxa and research questions (the Biodiversity Exploratories, www.biodiversity-exploratories.de), we identified a need for a minimal trait-data terminology that is flexible enough to include traits from all types of organisms but simple enough to be adopted by different research communities. In order to facilitate reproducibility of analyses, the reuse of data and the combination of datasets from multiple sources, we propose a standardized vocabulary for trait data, the Ecological Trait-data Standard Vocabulary (ETS, hosted on the GFBio Terminology Service, https://terminologies.gfbio.org/terms/ets/pages), which builds upon and is compatible with existing ontologies. By relying on unambiguous identifiers, the proposed minimal vocabulary for trait data captures the different degrees of resolution and measurement detail for multiple use cases of trait-based research. It further encourages the use of global Uniform Resource Identifiers (URIs) for taxa and trait definitions, methods and units, thereby readying the data publication for the semantic web. An accompanying R package (traitdataform) facilitates the upload of data to hosting services but also simplifies access to published trait data.
While originating from a current need in ecological research, the described products are, as a next step, being developed for a seamless fit with broader initiatives on biodiversity data standardisation, to foster better linkage of ecological trait data and global e-infrastructures for biological data. The ETS is maintained, and discussion of terms is managed, via GitHub (https://github.com/EcologicalTraitData/ETS).
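In the long-format spirit of the vocabulary described above, each trait measurement becomes one self-contained record whose taxon, trait, and unit can be given as globally unique identifiers. The field names and URIs below are simplified stand-ins for illustration, not the normative ETS terms:

```python
# One row of a long-format trait table; every field name and URI is an
# illustrative stand-in, not the normative ETS term.
record = {
    "scientificName": "Carabus auronitens",              # measured taxon
    "taxonID": "https://example.org/taxon/12345",        # hypothetical URI
    "traitName": "body_length",
    "traitID": "https://example.org/trait/body_length",  # hypothetical URI
    "traitValue": 24.5,
    "traitUnit": "mm",
}

# a dataset is then just a list of such records, one per measurement
table = [record]
```

Because every row carries its own identifiers, tables from different providers can be concatenated without reconciling column layouts first, which is the point of the long format.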
13. Fichtmueller, David, Walter G. Berendsohn, Gabriele Droege, Falko Glöckler, Anton Güntsch, Jana Hoffmann, Jörg Holetschek, Mareike Petersen, and Fabian Reimeier. "ABCD 3.0 Ready to Use". Biodiversity Information Science and Standards 3 (June 19, 2019). http://dx.doi.org/10.3897/biss.3.37214.

Abstract:
The TDWG standard ABCD (Access to Biological Collections Data task group 2007) was aimed at harmonizing terminologies used for modelling biological collection information and is used as a comprehensive data format for transferring collection and observation data between software components. The project ABCD 3.0 (A community platform for the development and documentation of the ABCD standard for natural history collections) was financed by the German Research Council (DFG). It addressed the transformation of ABCD into a semantic-web-compliant ontology by deconstructing the XML schema into individually addressable RDF (Resource Description Framework) resources published via the TDWG Terms Wiki (https://terms.tdwg.org/wiki/ABCD_2). In a second step, informal properties and concept relations described by the original ABCD schema were transformed into a machine-readable ontology and revised (Güntsch et al. 2016). The project was successfully finished in January 2019. The ABCD 3 setup allows for the creation of standard-conforming application schemas. The XML variant of ABCD 3.0 was restructured, simplified and made more consistent in terms of element names and types as compared to version 2.x. The XML elements are connected to their semantic concepts using the W3C SAWSDL (Semantic Annotation for Web Services Description Language and XML Schema) standard. The creation of specialized application schemas is encouraged; the first use case was the application schema for zoology. It will also be possible to generate application schemas that break the traditional unit-centric structure of ABCD. Further achievements of the project include creating a Wikibase instance as the editing platform, with related tools for maintenance queries, such as checking for inconsistencies in the ontology and automated export into RDF. This allows for fast iterations of new or updated versions, e.g. when additional mappings to other standards are done.
The setup is agnostic to the data standard created; it can therefore also be used to create or model other standards. Mappings to other standards like Darwin Core (https://dwc.tdwg.org/) and Audubon Core (https://tdwg.github.io/ac/) are now machine readable as well. All XPaths (XML Paths) of ABCD 3.0 XML have been mapped to all variants of ABCD 2.06 and 2.1, which will ease the transition to the new standard. The ABCD 3 Ontology will also be uploaded to the GFBio Terminology Server (Karam et al. 2016), where individual concepts can be easily searched or queried, allowing for better interactive modelling of ABCD concepts. ABCD documentation now adheres to TDWG's Standards Documentation Standard (SDS, https://www.tdwg.org/standards/sds/) and is located at https://abcd.tdwg.org/. The new site is hosted on GitHub: https://github.com/tdwg/abcd/tree/gh-pages.
14. Simoes, Felipe, Donat Agosti, and Marcus Guidoti. "Delivering Fit-for-Use Data: Quality control". Biodiversity Information Science and Standards 5 (September 20, 2021). http://dx.doi.org/10.3897/biss.5.75432.

Abstract:
Automatic data mining is not an easy task, and its success in the biodiversity world is deeply tied to the standardization and consistency of scientific journals' layout structure. The various formatting styles found in the over 500 million pages of published biodiversity information (Kalfatovich 2010) pose a remarkable challenge to the goal of automating the liberation of data currently trapped on the printed page. Regular expressions and other pattern-recognition strategies invariably fail to cope with this diverse landscape of academic publishing. Challenges such as incomplete data and taxonomic uncertainty add several additional layers of complexity. However, in the era of big data, the liberation of all the different facts contained in biodiversity literature is of crucial importance. Plazi tackles this daunting task by providing workflows and technology to automatically process biodiversity publications and annotate the information therein, all within the principles of FAIR (findable, accessible, interoperable, and reusable) data usage (Agosti and Egloff 2009). It uses the concept of taxonomic treatments (Catapano 2019) as the most fundamental unit in biodiversity literature to provide a framework that reflects the reality of taxonomic data for linking the different pieces of information contained in these taxonomic treatments. Treatment citations, composed of a taxonomic name and a bibliographic reference, and material citations carrying all specimen-related information are additional conceptual cornerstones of this framework. The resulting enhanced data are added to TreatmentBank. Figures and treatments are made FAIR by depositing them, with specific metadata, in the Biodiversity Literature Repository community (BLR) at the European Organization for Nuclear Research (CERN) repository Zenodo, and are pushed to GBIF. The automation, however, is error-prone due to the constraints explained above.
In order to cope with this remarkable task without compromising data quality, Plazi has established a quality control process based on logical rules that check the components of the extracted document, raising errors at four different levels of severity. These errors are also used in a data transit control mechanism, "the gatekeeper", which blocks certain data transits that create deposits (e.g., BLR) or reuse data (e.g., GBIF) in the presence of specific errors. Finally, a set of automatic notifications was included in the plazi/community GitHub repository, in order to provide a channel that empowers external users to report data issues directly to a dedicated team of data miners, who will in turn, and in a timely manner, fix these issues, improving data quality on demand. In this talk, we explain Plazi's internal quality control process and phases, and the data transits that are potentially affected, as well as statistics on the most common issues raised by this automated endeavor and how we use the generated data to continuously improve this important step in Plazi's workflow.
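The gatekeeper described above can be pictured as a small rule engine: every QC rule either passes or yields an error with a severity, and a transit is blocked when any error reaches a blocking threshold. A sketch under assumed names — the rule names, severity labels, and threshold are invented for illustration, not Plazi's actual checks:

```python
from enum import IntEnum

class Severity(IntEnum):
    INFO = 1
    MINOR = 2
    MAJOR = 3
    CRITICAL = 4  # four severity levels, as in the abstract

def run_checks(treatment, rules):
    """Apply each QC rule; a rule returns a Severity for a failed
    check, or None when the check passes."""
    return [(name, sev) for name, rule in rules
            if (sev := rule(treatment)) is not None]

def gatekeeper(errors, block_at=Severity.MAJOR):
    """Allow a data transit (e.g. a push to GBIF) only when no error
    reaches the blocking threshold."""
    return all(sev < block_at for _, sev in errors)

# illustrative rules, not Plazi's actual checks
rules = [
    ("has_taxon_name", lambda t: None if t.get("taxon") else Severity.CRITICAL),
    ("has_reference", lambda t: None if t.get("ref") else Severity.MINOR),
]
ok = gatekeeper(run_checks({"taxon": "Aus bus", "ref": None}, rules))  # True
```

A `False` result from `gatekeeper` would then veto the corresponding deposit or reuse transit until the reported issues are fixed.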
15. Tørdal, Sondre Sanden, Andreas Klausen, and Mette Mo Jakobsen. "Case study: Employing Agile Tools in Teaching Product Development to Mechatronics Students". Nordic Journal of STEM Education 5, no. 1 (February 24, 2021). http://dx.doi.org/10.5324/njsteme.v5i1.3900.

Abstract:
Agile tools such as Git are widely used in industry for source control, collaboration, and documentation. Such tools have been implemented in a mechatronic product development course to allow for easier collaboration between students. The course content is mainly provided via a GitLab Pages webpage which hosts software documentation and scripts. The course was first changed in 2019 to include the development of an autonomous strawberry picker; however, the use of a standard learning management system and lecture slides provided a cumbersome experience for the students. Therefore, these agile tools were introduced in the 2020 version to improve the course. In this paper, the course content is detailed, and student feedback from both years is discussed to reveal the outcome of the changes.
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!
