To see the other types of publications on this topic, follow the link: Domain Name System over HTTPS.

Journal articles on the topic 'Domain Name System over HTTPS'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Domain Name System over HTTPS.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Banadaki, Yaser M. "Detecting Malicious DNS over HTTPS Traffic in Domain Name System using Machine Learning Classifiers." Journal of Computer Sciences and Applications 8, no. 2 (August 20, 2020): 46–55. http://dx.doi.org/10.12691/jcsa-8-2-2.

2

Singanamalla, Sudheesh, Suphanat Chunhapanya, Jonathan Hoyland, Marek Vavruša, Tanya Verma, Peter Wu, Marwan Fayed, Kurtis Heimerl, Nick Sullivan, and Christopher Wood. "Oblivious DNS over HTTPS (ODoH): A Practical Privacy Enhancement to DNS." Proceedings on Privacy Enhancing Technologies 2021, no. 4 (July 23, 2021): 575–92. http://dx.doi.org/10.2478/popets-2021-0085.

Abstract:
The Internet's Domain Name System (DNS) responds to client hostname queries with corresponding IP addresses and records. Traditional DNS is unencrypted and leaks user information to on-lookers. Recent efforts to secure DNS using DNS over TLS (DoT) and DNS over HTTPS (DoH) have been gaining traction, ostensibly protecting DNS messages from third parties. However, the small number of available public large-scale DoT and DoH resolvers has reinforced DNS privacy concerns, specifically that DNS operators could use query contents and client IP addresses to link activities with identities. Oblivious DNS over HTTPS (ODoH) safeguards against these problems. In this paper we implement and deploy interoperable instantiations of the protocol, construct a corresponding formal model and analysis, and evaluate the protocols' performance with wide-scale measurements. Results suggest that ODoH is a practical privacy-enhancing replacement for DNS.
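The decoupling the abstract describes can be illustrated with a toy Python model in which the proxy only ever handles opaque blobs and the target never learns the client's identity. The XOR "encryption", the fixed 64-byte message size, and all class names below are stand-ins for the HPKE-based encapsulation the real protocol uses; this is a sketch of the message flow, not an ODoH implementation.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for the HPKE public-key encryption used by real ODoH.
    return bytes(a ^ b for a, b in zip(data, key))

class Target:
    """Resolver that can decrypt queries; it never learns the client's IP."""
    def __init__(self):
        self.key = os.urandom(64)          # toy symmetric key; ODoH uses HPKE
    def resolve(self, blob: bytes) -> bytes:
        name = xor(blob, self.key).decode().strip()
        answer = f"{name} A 192.0.2.1"     # canned DNS answer for the demo
        return xor(answer.encode().ljust(64), self.key)

class Proxy:
    """Forwards opaque blobs; it sees the client's IP but not the query."""
    def forward(self, target: Target, blob: bytes) -> bytes:
        return target.resolve(blob)

target, proxy = Target(), Proxy()
enc_query = xor(b"example.com".ljust(64), target.key)   # client-side encryption
enc_answer = proxy.forward(target, enc_query)
print(xor(enc_answer, target.key).decode().strip())     # example.com A 192.0.2.1
```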
3

Zain ul Abideen, Muhammad, Shahzad Saleem, and Madiha Ejaz. "VPN Traffic Detection in SSL-Protected Channel." Security and Communication Networks 2019 (October 29, 2019): 1–17. http://dx.doi.org/10.1155/2019/7924690.

Abstract:
In recent times, secure communication protocols over the web such as HTTPS (Hypertext Transfer Protocol Secure) are widely used instead of plain web communication protocols like HTTP (Hypertext Transfer Protocol). HTTPS provides end-to-end encryption between the user and service. Nowadays, organizations use network firewalls and/or intrusion detection and prevention systems (IDPS) to analyze network traffic in order to detect and protect against attacks and vulnerabilities. Depending on the size of the organization, these devices may differ in their capabilities. Simple network intrusion detection systems (NIDS) and firewalls generally have no feature to inspect HTTPS or encrypted traffic, so they rely on unencrypted traffic to manage the encrypted payload of the network. Recent and powerful next-generation firewalls have a Secure Sockets Layer (SSL) inspection feature, which is expensive and may not be suitable for every organization. A virtual private network (VPN) is a service which hides real traffic by creating an SSL-protected channel between the user and server. Every Internet activity is then performed under the established SSL tunnel. A user inside the network may use VPN services with malicious intent or to hide their activity from the organization's network security administrators. Any VPN service may be used to bypass the filters or signatures applied on network security devices. These services may be the source of a new virus or worm injected inside the network, or a gateway that facilitates information leakage. In this paper, we propose a novel approach to detect VPN activity inside a network. The proposed system analyzes the communication between user and server to extract features from the network, transport, and application layers which are not encrypted, and classifies the incoming traffic as malicious (i.e., VPN traffic) or standard traffic. Network traffic is analyzed and classified using DNS (Domain Name System) packets and HTTPS-based traffic. Once traffic is classified, the connection is analyzed based on the server's IP, the TCP port connected, the domain name, and the server name inside the HTTPS connection. This helps in verifying legitimate connections and flags VPN-based traffic. We worked on the top five freely available VPN services and analyzed their traffic patterns; the results show successful detection of the VPN activity performed by the user. We analyzed the activity of five users inside the network who used some sort of VPN service in their Internet activity. Out of a total of 729 connections made by different users, 329 connections were classified as legitimate activity, marking the remaining 400 connections as VPN-based. The proposed system is lightweight enough to keep overhead minimal, both in network and resource utilization, and requires no specialized hardware.
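One of the unencrypted-metadata checks the abstract describes can be sketched in a few lines of Python: a TLS connection whose destination IP never appeared in any observed DNS answer, or whose SNI disagrees with the observed DNS, is flagged. The feature set, rules, and names here are illustrative simplifications, not the paper's trained classifier.

```python
# Observed DNS traffic on the network: hostname -> set of answered IPs.
dns_answers = {
    "example.com": {"93.184.216.34"},
    "news.site":   {"203.0.113.7"},
}

def classify(conn: dict) -> str:
    ip, sni = conn["dst_ip"], conn.get("sni")
    resolved_names = {n for n, ips in dns_answers.items() if ip in ips}
    if not resolved_names:
        return "suspected-vpn"    # server IP never seen in any DNS answer
    if sni and sni not in resolved_names:
        return "suspected-vpn"    # SNI disagrees with the observed DNS name
    return "standard"

print(classify({"dst_ip": "93.184.216.34", "sni": "example.com"}))  # standard
print(classify({"dst_ip": "198.51.100.9", "sni": "vpn.example"}))   # suspected-vpn
```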
4

Di Martino, Mariano, Peter Quax, and Wim Lamotte. "Knocking on IPs: Identifying HTTPS Websites for Zero-Rated Traffic." Security and Communication Networks 2020 (August 28, 2020): 1–14. http://dx.doi.org/10.1155/2020/7285786.

Abstract:
Zero-rating is a technique where internet service providers (ISPs) allow consumers to use a specific website without charging the traffic to their internet data plan. Implementing zero-rating requires an accurate website identification method that is also efficient and reliable enough to be applied to live network traffic. In this paper, we examine existing website identification methods with the objective of applying zero-rating. Furthermore, we demonstrate the ineffectiveness of these methods against modern encryption protocols such as Encrypted SNI and DNS over HTTPS, and therefore show that ISPs will not be able to maintain the current zero-rating approaches in the near future. To address this concern, we present "Open-Knock," a novel approach that is capable of accurately identifying a zero-rated website, thwarts free-riding attacks, and is sustainable on the increasingly encrypted web. In addition, our approach does not require plaintext protocols or preprocessed fingerprints upfront. Finally, our experimental analysis shows that we are able to convert each IP address to the correct domain name for each website in the Tranco top 6000 websites list with an accuracy of 50.5%, and therefore outperform the current state-of-the-art approaches.
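For contrast, the naive IP-to-domain baseline (a reverse PTR lookup) can be written in two lines of Python; it fails for most CDN-hosted websites, which is part of what motivates the paper's approach. This is the weak baseline, not the Open-Knock technique itself.

```python
import socket
from typing import Optional

def naive_ip_to_domain(ip: str) -> Optional[str]:
    # Reverse (PTR) lookup: often missing or CDN-generic for web servers.
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return None

print(naive_ip_to_domain("8.8.8.8"))  # e.g. 'dns.google'
```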
5

Dahlberg, Rasmus, Tobias Pulls, Tom Ritter, and Paul Syverson. "Privacy-Preserving & Incrementally-Deployable Support for Certificate Transparency in Tor." Proceedings on Privacy Enhancing Technologies 2021, no. 2 (January 29, 2021): 194–213. http://dx.doi.org/10.2478/popets-2021-0024.

Abstract:
The security of the web has improved greatly over the last couple of years. A large majority of the web is now served encrypted as part of HTTPS, and web browsers accordingly moved from positive to negative security indicators that warn the user if a connection is insecure. A secure connection requires that the server presents a valid certificate that binds the domain name in question to a public key. A certificate used to be valid if signed by a trusted Certificate Authority (CA), but web browsers like Google Chrome and Apple's Safari have additionally started to mandate Certificate Transparency (CT) logging to overcome the weakest-link security of the CA ecosystem. Tor and the Firefox-based Tor Browser have yet to enforce CT. In this paper, we present privacy-preserving and incrementally-deployable designs that add support for CT in Tor. Our designs go beyond the currently deployed CT enforcements that are based on blind trust: if a user that uses Tor Browser is man-in-the-middled over HTTPS, we probabilistically detect and disclose cryptographic evidence of CA and/or CT log misbehavior. The first design increment allows Tor to play a vital role in the overall goal of CT: detect mis-issued certificates and hold CAs accountable. We achieve this by randomly cross-logging a subset of certificates into other CT logs. The final increments hold misbehaving CT logs accountable, initially assuming that some logs are benign and then without any such assumption. Given that the current CT deployment lacks strong mechanisms to verify if log operators play by the rules, exposing misbehavior is important for the web in general and not just Tor. The full design turns Tor into a system for maintaining a probabilistically-verified view of the CT log ecosystem available from Tor's consensus. Each increment leading up to it preserves privacy due to how we use Tor.
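The random cross-logging increment can be illustrated in a few lines of Python: with a small sampling probability, an observed certificate chain is resubmitted to a randomly chosen other CT log, so a mis-issued certificate hidden in one log eventually surfaces elsewhere. The log names and the probability below are hypothetical; a real client would follow the decision with an add-chain submission to the chosen log.

```python
import random
from typing import Optional

CT_LOGS = ["log-a.example", "log-b.example", "log-c.example"]  # hypothetical
P_CROSS_LOG = 0.01   # per-connection sampling probability (illustrative)

def maybe_cross_log(cert_chain: bytes, seen_in_log: str) -> Optional[str]:
    """With small probability, pick a random *other* CT log to resubmit to."""
    if random.random() >= P_CROSS_LOG:
        return None
    return random.choice([log for log in CT_LOGS if log != seen_in_log])

hits = [maybe_cross_log(b"<der chain>", "log-a.example") for _ in range(10_000)]
print(sum(h is not None for h in hits), "certificates selected for cross-logging")
```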
6

Victors, Jesse, Ming Li, and Xinwen Fu. "The Onion Name System." Proceedings on Privacy Enhancing Technologies 2017, no. 1 (January 1, 2017): 21–41. http://dx.doi.org/10.1515/popets-2017-0003.

Abstract:
Tor onion services, also known as hidden services, are anonymous servers of unknown location and ownership that can be accessed through any Tor-enabled client. They have gained popularity over the years, but since their introduction in 2002 still suffer from major usability challenges primarily due to their cryptographically-generated non-memorable addresses. In response to this difficulty, in this work we introduce the Onion Name System (OnioNS), a privacy-enhanced decentralized name resolution service. OnioNS allows Tor users to reference an onion service by a meaningful globally-unique verifiable domain name chosen by the onion service administrator. We construct OnioNS as an optional backwards-compatible plugin for Tor, simplify our design and threat model by embedding OnioNS within the Tor network, and provide mechanisms for authenticated denial-of-existence with minimal networking costs. We introduce a lottery-like system to reduce the threat of land rushes and domain squatting. Finally, we provide a security analysis, integrate our software with the Tor Browser, and conduct performance tests of our prototype.
7

Hoang, Nguyen Phong, Arian Akhavan Niaki, Phillipa Gill, and Michalis Polychronakis. "Domain name encryption is not enough: privacy leakage via IP-based website fingerprinting." Proceedings on Privacy Enhancing Technologies 2021, no. 4 (July 23, 2021): 420–40. http://dx.doi.org/10.2478/popets-2021-0078.

Abstract:
Although the security benefits of domain name encryption technologies such as DNS over TLS (DoT), DNS over HTTPS (DoH), and Encrypted Client Hello (ECH) are clear, their positive impact on user privacy is weakened by the still-exposed IP address information. However, content delivery networks, DNS-based load balancing, co-hosting of different websites on the same server, and IP address churn all contribute towards making domain–IP mappings unstable, and prevent straightforward IP-based browsing tracking. In this paper, we show that this instability is not a roadblock (assuming a universal DoT/DoH and ECH deployment), by introducing an IP-based website fingerprinting technique that allows a network-level observer to identify at scale the website a user visits. Our technique exploits the complex structure of most websites, which load resources from several domains besides their primary one. Using the generated fingerprints of more than 200K websites studied, we could successfully identify 84% of them when observing solely destination IP addresses. The accuracy rate increases to 92% for popular websites, and 95% for popular and sensitive websites. We also evaluated the robustness of the generated fingerprints over time, and demonstrate that they are still effective at successfully identifying about 70% of the tested websites after two months. We conclude by discussing strategies for website owners and hosting providers towards hindering IP-based website fingerprinting and maximizing the privacy benefits offered by DoT/DoH and ECH.
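The core matching step can be sketched minimally by treating a fingerprint as the set of destination IPs a page load touches and picking the candidate with the highest Jaccard similarity to the observed IPs. The fingerprints below are invented, and the paper's actual fingerprints and classifier are considerably more elaborate.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

# Hypothetical fingerprints: website -> destination IPs its page loads touch.
fingerprints = {
    "news.example":  {"198.51.100.1", "203.0.113.5", "192.0.2.9"},
    "store.example": {"198.51.100.1", "192.0.2.44"},
}

def identify(observed_ips: set) -> str:
    # Best-matching candidate under Jaccard similarity of IP sets.
    return max(fingerprints, key=lambda site: jaccard(fingerprints[site], observed_ips))

print(identify({"203.0.113.5", "192.0.2.9", "198.51.100.1"}))  # news.example
```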
8

Hussain, Mohammed Abdulridha, Hai Jin, Zaid Alaa Hussien, Zaid Ameen Abduljabbar, Salah H. Abbdal, and Ayad Ibrahim. "Enc-DNS-HTTP: Utilising DNS Infrastructure to Secure Web Browsing." Security and Communication Networks 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/9479476.

Abstract:
Online information security is a major concern for both users and companies, since data transferred via the Internet is becoming increasingly sensitive. The World Wide Web uses Hypertext Transfer Protocol (HTTP) to transfer information and Secure Sockets Layer (SSL) to secure the connection between clients and servers. However, Hypertext Transfer Protocol Secure (HTTPS) is vulnerable to attacks that threaten the privacy of information sent between clients and servers. In this paper, we propose Enc-DNS-HTTP for securing client requests, protecting server responses, and withstanding HTTPS attacks. Enc-DNS-HTTP is based on the distribution of a web server public key, which is transferred via a secure communication between client and a Domain Name System (DNS) server. This key is used to encrypt client-server communication. The scheme is implemented in the C programming language and tested on a Linux platform. In comparison with Apache HTTPS, this scheme is shown to have more effective resistance to attacks and improved performance since it does not involve a high number of time-consuming operations.
9

Antic, Djordje, and Mladen Veinovic. "Implementation of DNSSEC-secured name servers for ni.rs zone and best practices." Serbian Journal of Electrical Engineering 13, no. 3 (2016): 369–80. http://dx.doi.org/10.2298/sjee1603369a.

Abstract:
As a backbone of all communications over the Internet, DNS (Domain Name System) is crucial for all entities that need to be visible and provide services outside their internal networks. Public administration is a prime example for various services that have to be provided to citizens. This manuscript presents one possible approach, implemented in the administration of the City of Nis, for improving the robustness and resilience of external domain space, as well as securing it with DNSSEC (DNS Security Extensions).
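A quick presence check in the spirit of the article can be written with the third-party dnspython package (pip install dnspython), querying the zone for DNSKEY and DS records; the zone name follows the article's example. Presence of both record types is necessary but not sufficient for a correctly signed delegation, and full chain validation is out of scope for this sketch.

```python
import dns.resolver

def has_dnssec_records(zone: str) -> bool:
    """Return True if the zone publishes DNSKEY records and its parent
    publishes a DS record for it (a rough DNSSEC-deployment indicator)."""
    try:
        dnskeys = dns.resolver.resolve(zone, "DNSKEY")
        ds = dns.resolver.resolve(zone, "DS")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    return len(dnskeys) > 0 and len(ds) > 0

print(has_dnssec_records("ni.rs"))
```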
10

Devos, Koen, Filiep T’jollyn, Peter Desmet, Frederic Piesschaert, and Dimitri Brosens. "Watervogels – Wintering waterbirds in Flanders, Belgium." ZooKeys 915 (February 24, 2020): 127–35. http://dx.doi.org/10.3897/zookeys.915.38265.

Abstract:
"Watervogels – Wintering waterbirds in Flanders, Belgium" is a sampling event dataset published by the Research Institute for Nature and Forest (INBO). It contains more than 94,000 sampling events (site counts), covering over 710,000 species observations (and zero counts when there is no associated occurrence) and 36 million individual birds for the period 1991–2016. The dataset includes information on 167 different species in nearly 1,100 wetland sites. The aim of these bird counts is to gather information on the size, distribution, and long-term trends of wintering waterbird populations in Flanders. These data are also used to assess the importance of individual sites for waterbirds, using quantitative criteria. Furthermore, the waterbird counts contribute to international monitoring programs, such as the International Waterbird Census (coordinated by Wetlands International) and fulfil some of the objectives of the European Bird Directive, the Ramsar Convention, and the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA). Here the dataset is published as a standardized Darwin Core Archive and includes for each event: a stable event ID, date and location of observation and a short description of the sampling protocol, effort and conditions (in the event core), supplemented with specific information for each occurrence: a stable occurrence ID, the scientific name and higher classification of the observed species, the number of recorded individuals, and a reference to the observer of the record (in the occurrence extension). Issues with the dataset can be reported at https://github.com/inbo/data-publication/issues. The following information is not included in this dataset and available upon request: roost site counts, counts from historical (inactive) locations and counts from before 1991. We have released this dataset to the public domain under a CC0 1.0 Universal (CC0 1.0) Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/). We would appreciate it if you follow the INBO norms for data use (https://www.inbo.be/en/norms-data-use) when using the data. If you have any questions regarding this dataset, do not hesitate to contact us via the contact information provided in the metadata or via opendata@inbo.be.
11

Amplayo, Reinald Kim, Seung-won Hwang, and Min Song. "AutoSense Model for Word Sense Induction." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6212–19. http://dx.doi.org/10.1609/aaai.v33i01.33016212.

Abstract:
Word sense induction (WSI), or the task of automatically discovering multiple senses or meanings of a word, has three main challenges: domain adaptability, novel sense detection, and sense granularity flexibility. While current latent variable models are known to solve the first two challenges, they are not flexible to different word sense granularities, which differ very much among words, from aardvark with one sense, to play with over 50 senses. Current models either require hyperparameter tuning or nonparametric induction of the number of senses, which we find both to be ineffective. Thus, we aim to eliminate these requirements and solve the sense granularity problem by proposing AutoSense, a latent variable model based on two observations: (1) senses are represented as a distribution over topics, and (2) senses generate pairings between the target word and its neighboring word. These observations alleviate the problem by (a) throwing garbage senses and (b) additionally inducing fine-grained word senses. Results show great improvements over the state-of-the-art models on popular WSI datasets. We also show that AutoSense is able to learn the appropriate sense granularity of a word. Finally, we apply AutoSense to the unsupervised author name disambiguation task where the sense granularity problem is more evident and show that AutoSense is evidently better than competing models. We share our data and code here: https://github.com/rktamplayo/AutoSense.
12

Purkait, Swapan. "Examining the effectiveness of phishing filters against DNS based phishing attacks." Information & Computer Security 23, no. 3 (July 13, 2015): 333–46. http://dx.doi.org/10.1108/ics-02-2013-0009.

Abstract:
Purpose – This paper aims to report on research that tests the effectiveness of anti-phishing tools in detecting phishing attacks by conducting some real-time experiments using freshly hosted phishing sites. Almost all modern-day Web browsers and antivirus programs provide security indicators to mitigate the widespread problem of phishing on the Internet. Design/methodology/approach – The current work examines and evaluates the effectiveness of five popular Web browsers, two third-party phishing toolbar add-ons and seven popular antivirus programs in terms of their capability to detect locally hosted spoofed websites. The same tools have also been tested against fresh phishing sites hosted on the Internet. Findings – The experiments yielded alarming results. Although the success rate against live phishing sites was encouraging, only 3 of the 14 tools tested could successfully detect a single spoofed website hosted locally. Originality/value – This work proposes the inclusion of domain name system server authentication and verification of name servers for a visiting website for all future anti-phishing toolbars. It also proposes that a Web browser should maintain a white list of websites that engage in online monetary transactions so that when a user needs to access any of these, the default protocol should always be HTTPS (Hypertext Transfer Protocol Secure), without which the Web browser should prevent the page from loading.
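The whitelist-plus-HTTPS behaviour the paper proposes is easy to prototype. The sketch below upgrades the scheme for a hypothetical whitelist of transactional sites; the site names are invented for illustration.

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical white list of sites that engage in online monetary transactions.
TRANSACTION_WHITELIST = {"bank.example", "pay.example"}

def enforce_https(url: str) -> str:
    """Upgrade the scheme to HTTPS for whitelisted transactional sites,
    mirroring the browser behaviour the paper proposes."""
    parts = urlsplit(url)
    if parts.hostname in TRANSACTION_WHITELIST and parts.scheme != "https":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(enforce_https("http://bank.example/login"))  # https://bank.example/login
```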
13

Dobbins, Nicholas J., Clifford H. Spital, Robert A. Black, Jason M. Morrison, Bas de Veer, Elizabeth Zampino, Robert D. Harrington, et al. "Leaf: an open-source, model-agnostic, data-driven web application for cohort discovery and translational biomedical research." Journal of the American Medical Informatics Association 27, no. 1 (October 8, 2019): 109–18. http://dx.doi.org/10.1093/jamia/ocz165.

Abstract:
Objective Academic medical centers and health systems are increasingly challenged with supporting appropriate secondary use of clinical data. Enterprise data warehouses have emerged as central resources for these data, but often require an informatician to extract meaningful information, limiting direct access by end users. To overcome this challenge, we have developed Leaf, a lightweight self-service web application for querying clinical data from heterogeneous data models and sources. Materials and Methods Leaf utilizes a flexible biomedical concept system to define hierarchical concepts and ontologies. Each Leaf concept contains both textual representations and SQL query building blocks, exposed by a simple drag-and-drop user interface. Leaf generates abstract syntax trees which are compiled into dynamic SQL queries. Results Leaf is a successful production-supported tool at the University of Washington, which hosts a central Leaf instance querying an enterprise data warehouse with over 300 active users. Through the support of UW Medicine (https://uwmedicine.org), the Institute of Translational Health Sciences (https://www.iths.org), and the National Center for Data to Health (https://ctsa.ncats.nih.gov/cd2h/), Leaf source code has been released into the public domain at https://github.com/uwrit/leaf. Discussion Leaf allows the querying of single or multiple clinical databases simultaneously, even those of different data models. This enables fast installation without costly extraction or duplication. Conclusions Leaf differs from existing cohort discovery tools because it does not specify a required data model and is designed to seamlessly leverage existing user authentication systems and clinical databases in situ. We believe Leaf to be useful for health system analytics, clinical research data warehouses, precision medicine biobanks, and clinical studies involving large patient cohorts.
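As a much-simplified sketch of the dynamic SQL generation idea (Leaf actually compiles abstract syntax trees built from drag-and-drop concepts), each concept below carries a SQL predicate and a query is composed by ANDing them. Table and column names are invented for illustration and are not Leaf's schema.

```python
def build_query(concepts: list) -> str:
    # Each concept contributes one predicate; ANDing them narrows the cohort.
    where = " AND ".join(f"({c['sql']})" for c in concepts)
    return f"SELECT person_id FROM observation WHERE {where}"

diabetes = {"name": "Type 2 diabetes", "sql": "code LIKE 'E11%'"}
recent   = {"name": "Seen since 2020", "sql": "visit_date >= '2020-01-01'"}
print(build_query([diabetes, recent]))
```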
14

Nakatsuka, Yoshimichi, Andrew Paverd, and Gene Tsudik. "PDoT." Digital Threats: Research and Practice 2, no. 1 (March 2021): 1–22. http://dx.doi.org/10.1145/3431171.

Abstract:
Security and privacy of the Internet Domain Name System (DNS) have been longstanding concerns. Recently, there is a trend to protect DNS traffic using Transport Layer Security (TLS). However, at least two major issues remain: (1) How do clients authenticate DNS-over-TLS endpoints in a scalable and extensible manner? and (2) How can clients trust endpoints to behave as expected? In this article, we propose a novel Private DNS-over-TLS (PDoT) architecture. PDoT includes a DNS Recursive Resolver (RecRes) that operates within a Trusted Execution Environment. Using Remote Attestation, DNS clients can authenticate and receive strong assurance of the trustworthiness of the PDoT RecRes. We provide an open source proof-of-concept implementation of PDoT and experimentally demonstrate that its latency and throughput match those of the popular Unbound DNS-over-TLS resolver.
15

Lorenz, Christof, Tanja C. Portele, Patrick Laux, and Harald Kunstmann. "Bias-corrected and spatially disaggregated seasonal forecasts: a long-term reference forecast product for the water sector in semi-arid regions." Earth System Science Data 13, no. 6 (June 15, 2021): 2701–22. http://dx.doi.org/10.5194/essd-13-2701-2021.

Abstract:
Seasonal forecasts have the potential to substantially improve water management, particularly in water-scarce regions. However, global seasonal forecasts are usually not directly applicable as they are provided at coarse spatial resolutions of at best 36 km and suffer from model biases and drifts. In this study, we therefore apply a bias-correction and spatial-disaggregation (BCSD) approach to seasonal precipitation, temperature and radiation forecasts of the latest long-range seasonal forecasting system SEAS5 of the European Centre for Medium-Range Weather Forecasts (ECMWF). As reference we use data from the ERA5-Land offline land surface rerun of the latest ECMWF reanalysis ERA5. Thereby, we correct for model biases and drifts and improve the spatial resolution from 36 km to 0.1°. This is performed, for example, over four predominantly semi-arid study domains across the world, which include the river basins of the Karun (Iran), the São Francisco River (Brazil), the Tekeze–Atbara river and Blue Nile (Sudan, Ethiopia and Eritrea), and the Catamayo–Chira river (Ecuador and Peru). Compared against ERA5-Land, the bias-corrected and spatially disaggregated forecasts have a higher spatial resolution and show reduced biases and better agreement of spatial patterns than the raw forecasts as well as remarkably reduced lead-dependent drift effects. But our analysis also shows that computing monthly averages from daily bias-corrected forecasts, particularly during periods with strong temporal climate gradients or heteroscedasticity, can lead to remaining biases especially in the lowest- and highest-lead forecasts. Our SEAS5 BCSD forecasts cover the whole (re-)forecast period from 1981 to 2019 and include bias-corrected and spatially disaggregated daily and monthly ensemble forecasts for precipitation, average, minimum, and maximum temperature as well as for shortwave radiation from the issue date to the next 215 d and 6 months, respectively. This sums up to more than 100 000 forecasted days for each of the 25 (until the year 2016) and 51 (from the year 2017) ensemble members and each of the five analyzed variables. The full repository is made freely available to the public via the World Data Centre for Climate at https://doi.org/10.26050/WDCC/SaWaM_D01_SEAS5_BCSD (Domain D01: Karun Basin (Iran), Lorenz et al., 2020b), https://doi.org/10.26050/WDCC/SaWaM_D02_SEAS5_BCSD (Domain D02: São Francisco Basin (Brazil), Lorenz et al., 2020c), https://doi.org/10.26050/WDCC/SaWaM_D03_SEAS5_BCSD (Domain D03: basins of the Tekeze–Atbara and Blue Nile (Ethiopia, Eritrea, Sudan), Lorenz et al., 2020d), and https://doi.org/10.26050/WDCC/SaWaM_D04_SEAS5_BCSD (Domain D04: Catamayo–Chira Basin (Ecuador, Peru), Lorenz et al., 2020a). It is currently the first publicly available daily high-resolution seasonal forecast product that covers multiple regions and variables for such a long period. It hence provides a unique test bed for evaluating the performance of seasonal forecasts over semi-arid regions and as driving data for hydrological, ecosystem or climate impact models. Therefore, our forecasts provide a crucial contribution for the disaster preparedness and, finally, climate proofing of the regional water management in climatically sensitive regions.
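The bias-correction half of BCSD is commonly realized as empirical quantile mapping, which the NumPy sketch below demonstrates on synthetic data: each forecast value is replaced by the value at the same quantile of the reference climatology. This is a generic stand-in for that step under invented data, not the authors' code or configuration.

```python
import numpy as np

def quantile_map(forecast, model_clim, ref_clim):
    """Empirical quantile mapping: map each forecast value to the value at
    the same quantile in the reference (ERA5-Land-like) climatology."""
    model_sorted = np.sort(model_clim)
    ref_sorted = np.sort(ref_clim)
    # Quantile of each forecast value within the model climatology.
    q = np.searchsorted(model_sorted, forecast) / len(model_sorted)
    return np.quantile(ref_sorted, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(0)
model_clim = rng.gamma(2.0, 2.0, 1000) + 1.0   # biased-wet model climatology
ref_clim = rng.gamma(2.0, 2.0, 1000)           # reference climatology
print(quantile_map(np.array([3.0, 8.0]), model_clim, ref_clim))
```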
16

Li, Pengyuan, Xiangying Jiang, and Hagit Shatkay. "Figure and caption extraction from biomedical documents." Bioinformatics 35, no. 21 (April 5, 2019): 4381–88. http://dx.doi.org/10.1093/bioinformatics/btz228.

Abstract:
Motivation Figures and captions convey essential information in biomedical documents. As such, there is a growing interest in mining published biomedical figures and in utilizing their respective captions as a source of knowledge. Notably, an essential step underlying such mining is the extraction of figures and captions from publications. While several PDF parsing tools that extract information from such documents are publicly available, they attempt to identify images by analyzing the PDF encoding and structure and the complex graphical objects embedded within. As such, they often incorrectly identify figures and captions in scientific publications, whose structure is often non-trivial. The extraction of figures, captions and figure-caption pairs from biomedical publications is thus neither well-studied nor yet well-addressed. Results We introduce a new and effective system for figure and caption extraction, PDFigCapX. Unlike existing methods, we first separate between text and graphical contents, and then utilize layout information to effectively detect and extract figures and captions. We generate files containing the figures and their associated captions and provide those as output to the end-user. We test our system both over a public dataset of computer science documents previously used by others, and over two newly collected sets of publications focusing on the biomedical domain. Our experiments and results comparing PDFigCapX to other state-of-the-art systems show a significant improvement in performance, and demonstrate the effectiveness and robustness of our approach. Availability and implementation Our system is publicly available for use at: https://www.eecis.udel.edu/~compbio/PDFigCapX. The two new datasets are available at: https://www.eecis.udel.edu/~compbio/PDFigCapX/Downloads
17

Díaz-Sánchez, Daniel, Andrés Marín-Lopez, Florina Almenárez Mendoza, and Patricia Arias Cabarcos. "DNS/DANE Collision-Based Distributed and Dynamic Authentication for Microservices in IoT †." Sensors 19, no. 15 (July 26, 2019): 3292. http://dx.doi.org/10.3390/s19153292.

Abstract:
IoT devices provide real-time data to a rich ecosystem of services and applications. The volume of data and the involved subscribe/notify signaling will likely become a challenge also for access and core networks. To alleviate the core of the network, other technologies like fog computing can be used. On the security side, designers of IoT low-cost devices and applications often reuse old versions of development frameworks and software components that contain vulnerabilities. Many server applications today are designed using microservice architectures where components are easier to update. Thus, IoT can benefit from deploying microservices in the fog as it offers the required flexibility for the main players of ubiquitous computing: nomadic users. In such deployments, IoT devices need the dynamic instantiation of microservices. IoT microservices require certificates so they can be accessed securely. Thus, every microservice instance may require a newly-created domain name and a certificate. The DNS-based Authentication of Named Entities (DANE) extension to Domain Name System Security Extensions (DNSSEC) allows linking a certificate to a given domain name. Thus, the combination of DNSSEC and DANE provides microservices' clients with secure information regarding the domain name, IP address, and server certificate of a given microservice. However, IoT microservices may be short-lived since devices can move from one local fog to another, forcing DNSSEC servers to sign zones whenever new changes occur. Considering DNSSEC and DANE were designed to cope with static services, coping with IoT dynamic microservice instantiation can throttle the scalability in the fog. To overcome this limitation, this article proposes a solution that modifies the DNSSEC/DANE signature mechanism using chameleon signatures and defining a new soft delegation scheme. Chameleon signatures are signatures computed over a chameleon hash, which have a property: a secret trapdoor function can be used to compute collisions to the hash. Since the hash is maintained, the signature does not have to be computed again. In the soft delegation scheme, DNS servers obtain a trapdoor that allows performing changes in a constrained zone without affecting normal DNS operation. In this way, a server can receive this soft delegation and modify the DNS zone to cope with frequent changes such as microservice dynamic instantiation. Changes in the soft delegated zone are much faster and do not require the intervention of the DNS primary servers of the zone.
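The collision property that makes this work is easy to demonstrate with toy parameters. In the sketch below, a Krawczyk–Rabin-style chameleon hash CH(m, r) = g^m * y^r mod p is built over a small Schnorr group; the trapdoor holder solves for a new randomizer r' so the hash of updated content collides with the old one, leaving any signature over the hash valid. The parameters are deliberately tiny and insecure, chosen only so the arithmetic is easy to follow.

```python
# Toy chameleon hash over a Schnorr group (NOT secure parameters).
p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup
x = 123                          # trapdoor (secret key)
y = pow(g, x, p)                 # public key

def ch(m: int, r: int) -> int:
    return (pow(g, m, p) * pow(y, r, p)) % p

def collide(m: int, r: int, m_new: int) -> int:
    # Solve m + x*r = m_new + x*r' (mod q) for r' using the trapdoor x.
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 77, 5                     # hash input for the original zone content
m_new = 404                      # hash input for the updated zone content
r_new = collide(m, r, m_new)
assert ch(m, r) == ch(m_new, r_new)   # same digest, so same signature verifies
print("collision randomizer:", r_new)
```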
18

Kadhim, Huda Yousif, Karim Hashim Al-saedi, and Mustafa Dhiaa Al-Hassani. "Mobile Phishing Websites Detection and Prevention Using Data Mining Techniques." International Journal of Interactive Mobile Technologies (iJIM) 13, no. 10 (September 25, 2019): 205. http://dx.doi.org/10.3991/ijim.v13i10.10797.

Abstract:
The widespread use of smart phones nowadays makes them vulnerable to phishing. Phishing is the process of trying to steal user information over the Internet by claiming to be a trusted entity and thereby accessing and stealing the victim's data (user name, password, and credit card details). Consequently, the need for a mobile phishing detection system has become urgent. This is what we attempt to address in this paper, where we introduce a system to detect phishing websites on Android phones. It predicts and prevents phishing websites from deceiving users, utilizing data mining techniques to predict whether a website is phishing or not based on a set of factors (URL-based features, HTML-based features, and domain-based features). The results show the system's effectiveness in predicting phishing websites, with a prediction accuracy of 97%.
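A few of the URL-based features such a classifier might consume can be sketched as follows. The selection is an illustrative subset commonly seen in phishing-detection work, not the paper's full feature set, which also includes HTML- and domain-based features.

```python
import re
from urllib.parse import urlsplit

def url_features(url: str) -> dict:
    """Extract a small, illustrative set of URL-based phishing features."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    return {
        "url_length": len(url),
        "has_at_symbol": "@" in url,                     # common obfuscation trick
        "host_is_ip": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "num_subdomains": max(host.count(".") - 1, 0),
        "uses_https": parts.scheme == "https",
    }

print(url_features("http://198.51.100.3/secure@login/update"))
```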
19

Kim, Myoung Hwa, Young Chul Yoo, Sun Joon Bai, Kang-Young Lee, Nayeon Kim, and Ki Young Lee. "Physiologic and hemodynamic changes in patients undergoing open abdominal cytoreductive surgery with hyperthermic intraperitoneal chemotherapy." Journal of International Medical Research 49, no. 1 (January 2021): 030006052098326. http://dx.doi.org/10.1177/0300060520983263.

Abstract:
Objective We aimed to determine the physiological and hemodynamic changes in patients who were undergoing hyperthermic intraperitoneal chemotherapy (HIPEC) cytoreductive surgeries. Methods This prospective, observational study enrolled 21 patients who were undergoing elective cytoreductive surgery with HIPEC at our hospital over 2 years. We collected vital signs, hemodynamic parameters including the global end-diastolic volume index (GEDI) and extravascular lung water index (ELWI) using the VolumeView™ system, and arterial blood gas analysis from all patients. Data were recorded before skin incision (T1); 30 minutes before HIPEC initiation (T2); 30 (T3), 60 (T4), and 90 (T5) minutes after HIPEC initiation; 30 minutes after HIPEC completion (T6); and 10 minutes before surgery completion (T7). Results Patients showed an increase in body temperature and cardiac index and a decrease in the systemic vascular resistance index. GEDI was 715.4 (T1) to 809.7 (T6), and ELWI was 6.9 (T1) to 7.3 (T5). Conclusions HIPEC increased patients' body temperature and cardiac output and decreased systemic vascular resistance. Although the parameters extracted from the VolumeView™ system were within their normal ranges, the transpulmonary thermodilution approach is helpful in intraoperative hemodynamic management during open abdominal cytoreductive surgery with HIPEC. Trial registry name: ClinicalTrials.gov. Trial registration number: NCT02325648. URL: https://clinicaltrials.gov/ct2/results?cond=NCT02325648&term
20

Stvan, Laurel Smith. "The contingent meaning of –ex brand names in English." Corpora 1, no. 2 (November 2006): 217–50. http://dx.doi.org/10.3366/cor.2006.1.2.217.

Abstract:
The –ex string found in English product and company names (e.g., Kleenex, Timex and Virex) is investigated to discover whether this ending has consistent meaning across coined words and to observe any constraints on its attachment and interpretation. Seven hundred and ninety-three –ex brand name types were collected and examined, derived from American English texts in the Brown and Frown corpora as well as over 600 submissions to the US Patent and Trademark Office's Trademark Electronic Search System database (TESS); American native English speakers were also surveyed to assess interpretations of –ex meaning in brands. Analysis of these coined terms reveals that –ex meaning is contingent, reflecting assumptions by a given speaker of a referent's domain in a given time, region and culture. Yet, despite ambiguities in its interpretation, the –ex form shows increasing use.
21

Ivanov, Ievgen, Artur Korniłowicz, and Mykola Nikitchenko. "On an Algorithmic Algebra over Simple-Named Complex-Valued Nominative Data." Formalized Mathematics 26, no. 2 (July 1, 2018): 149–58. http://dx.doi.org/10.2478/forma-2018-0012.

Abstract:
This paper continues formalization in the Mizar system [2, 1] of basic notions of the composition-nominative approach to program semantics [14] which was started in [8, 12, 10]. The composition-nominative approach studies mathematical models of computer programs and data on various levels of abstraction and generality and provides tools for reasoning about their properties. In particular, data in computer systems are modeled as nominative data [15]. Besides formalization of semantics of programs, certain elements of the composition-nominative approach were applied to abstract systems in a mathematical systems theory [4, 6, 7, 5, 3]. In the paper we give a formal definition of the notions of a binominative function over given sets of names and values (i.e. a partial function which maps simple-named complex-valued nominative data to such data) and a nominative predicate (a partial predicate on simple-named complex-valued nominative data). The sets of such binominative functions and nominative predicates form the carrier of the generalized Glushkov algorithmic algebra for simple-named complex-valued nominative data [15]. This algebra can be used to formalize algorithms which operate on various data structures (such as multidimensional arrays, lists, etc.) and reason about their properties. In particular, we formalize the operations of this algebra which require a specification of a data domain and which include the existential quantifier, the assignment composition, the composition of superposition into a predicate, the composition of superposition into a binominative function, the name checking predicate. The details on formalization of nominative data and the operations of the algorithmic algebra over them are described in [11, 13, 9].
22

Al-Nawasrah, Ahmad, Ammar Ali Almomani, Samer Atawneh, and Mohammad Alauthman. "A Survey of Fast Flux Botnet Detection With Fast Flux Cloud Computing." International Journal of Cloud Applications and Computing 10, no. 3 (July 2020): 17–53. http://dx.doi.org/10.4018/ijcac.2020070102.

Abstract:
A botnet refers to a set of compromised machines controlled distantly by an attacker. Botnets are considered the basis of numerous security threats around the world. Command and control (C&C) servers are the backbone of botnet communications, in which bots send a report to the botmaster, and the latter sends attack orders to those bots. Botnets are also categorized according to their C&C protocols, such as internet relay chat (IRC) and peer-to-peer (P2P) botnets. A domain name system (DNS) method known as fast-flux is used by bot herders to cover malicious botnet activities and increase the lifetime of malicious servers by quickly changing the IP addresses of the domain names over time. Several methods have been suggested to detect fast-flux domains. However, these methods achieve low detection accuracy, especially for zero-day domains. They also entail a significantly long detection time and consume high memory storage. In this survey, we present an overview of the various techniques used to detect fast-flux domains according to solution scopes, namely, host-based, router-based, DNS-based, and cloud computing techniques. This survey provides an understanding of the problem, its current solution space, and the future research directions expected.
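The classic fast-flux signal (many distinct A records with very low TTLs over a short observation window) can be probed with a repeated-lookup heuristic, sketched here using the third-party dnspython package (pip install dnspython). The thresholds and lookup counts are illustrative, not drawn from the survey, and detecting zero-day fast-flux domains needs considerably more than this.

```python
import time
import dns.resolver

def looks_fast_flux(domain: str, lookups: int = 5, pause: float = 2.0) -> bool:
    """Flag a domain whose repeated A lookups yield many distinct IPs
    with very short TTLs (illustrative thresholds)."""
    ips, min_ttl = set(), float("inf")
    for _ in range(lookups):
        answer = dns.resolver.resolve(domain, "A")
        ips.update(record.address for record in answer)
        min_ttl = min(min_ttl, answer.rrset.ttl)
        time.sleep(pause)
    return len(ips) >= 10 and min_ttl <= 300

print(looks_fast_flux("example.com"))
```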
23

Papadopoulos, Pavlos, Nikolaos Pitropakis, William J. Buchanan, Owen Lo, and Sokratis Katsikas. "Privacy-Preserving Passive DNS." Computers 9, no. 3 (August 12, 2020): 64. http://dx.doi.org/10.3390/computers9030064.

Abstract:
The Domain Name System (DNS) was created to resolve the IP addresses of web servers to easily remembered names. When it was initially created, security was not a major concern; nowadays, this lack of inherent security and trust has exposed the global DNS infrastructure to malicious actors. The passive DNS data collection process creates a database containing various DNS data elements, some of which are personal and need to be protected to preserve the privacy of the end users. To this end, we propose the use of distributed ledger technology. We use Hyperledger Fabric to create a permissioned blockchain, which only authorized entities can access. The proposed solution supports queries for storing and retrieving data from the blockchain ledger, allowing the use of the passive DNS database for further analysis, e.g., for the identification of malicious domain names. Additionally, it effectively protects the DNS personal data from unauthorized entities, including the administrators that can act as potential malicious insiders, and allows only the data owners to perform queries over these data. We evaluated our proposed solution by creating a proof-of-concept experimental setup that passively collects DNS data from a network and then uses the distributed ledger technology to store the data in an immutable ledger, thus providing a full historical overview of all the records.
24

Dahlquist, Kam D., John David N. Dionisio, Ben G. Fitzpatrick, Nicole A. Anguiano, Anindita Varshneya, Britain J. Southwick, and Mihir Samdarshi. "GRNsight: a web application and service for visualizing models of small- to medium-scale gene regulatory networks." PeerJ Computer Science 2 (September 12, 2016): e85. http://dx.doi.org/10.7717/peerj-cs.85.

Abstract:
GRNsight is a web application and service for visualizing models of gene regulatory networks (GRNs). A gene regulatory network (GRN) consists of genes, transcription factors, and the regulatory connections between them which govern the level of expression of mRNA and protein from genes. The original motivation came from our efforts to perform parameter estimation and forward simulation of the dynamics of a differential equations model of a small GRN with 21 nodes and 31 edges. We wanted a quick and easy way to visualize the weight parameters from the model which represent the direction and magnitude of the influence of a transcription factor on its target gene, so we created GRNsight. GRNsight automatically lays out either an unweighted or weighted network graph based on an Excel spreadsheet containing an adjacency matrix where regulators are named in the columns and target genes in the rows, a Simple Interaction Format (SIF) text file, or a GraphML XML file. When a user uploads an input file specifying an unweighted network, GRNsight automatically lays out the graph using black lines and pointed arrowheads. For a weighted network, GRNsight uses pointed and blunt arrowheads, and colors the edges and adjusts their thicknesses based on the sign (positive for activation or negative for repression) and magnitude of the weight parameter. GRNsight is written in JavaScript, with diagrams facilitated by D3.js, a data visualization library. Node.js and the Express framework handle server-side functions. GRNsight’s diagrams are based on D3.js’s force graph layout algorithm, which was then extensively customized to support the specific needs of GRNs. Nodes are rectangular and support gene labels of up to 12 characters. The edges are arcs, which become straight lines when the nodes are close together. Self-regulatory edges are indicated by a loop. When a user mouses over an edge, the numerical value of the weight parameter is displayed. Visualizations can be modified by sliders that adjust the force graph layout parameters and through manual node dragging. GRNsight is best-suited for visualizing networks of fewer than 35 nodes and 70 edges, although it accepts networks of up to 75 nodes or 150 edges. GRNsight has general applicability for displaying any small, unweighted or weighted network with directed edges for systems biology or other application domains. GRNsight serves as an example of following and teaching best practices for scientific computing and complying with FAIR principles, using an open and test-driven development model with rigorous documentation of requirements and issues on GitHub. An exhaustive unit testing framework using Mocha and the Chai assertion library consists of around 160 automated unit tests that examine nearly 530 test files to ensure that the program is running as expected. The GRNsight application (http://dondi.github.io/GRNsight/) and code (https://github.com/dondi/GRNsight) are available under the open source BSD license.
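The adjacency-matrix input convention described above is simple to parse. The sketch below reads a CSV stand-in (GRNsight itself accepts Excel, SIF, or GraphML) with regulators in the columns and targets in the rows, and emits signed, weighted edges; gene names and weights are invented for illustration.

```python
import csv
import io

matrix_csv = """targets,ACE2,SWI5
ACE2,0,0.75
SWI5,-0.4,0
"""

def edges_from_adjacency(text: str):
    """Yield (regulator, target, weight, kind) for each nonzero matrix cell."""
    rows = list(csv.reader(io.StringIO(text)))
    regulators = rows[0][1:]
    for row in rows[1:]:
        target, weights = row[0], map(float, row[1:])
        for regulator, w in zip(regulators, weights):
            if w != 0:
                kind = "activation" if w > 0 else "repression"
                yield (regulator, target, w, kind)

for edge in edges_from_adjacency(matrix_csv):
    print(edge)   # e.g. ('SWI5', 'ACE2', 0.75, 'activation')
```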
25

Wen, Zhi, Pratheeksha Nair, Chih-Ying Deng, Xing Han Lu, Edward Moseley, Naomi George, Charlotta Lindvall, and Yue Li. "Mining heterogeneous clinical notes by multi-modal latent topic model." PLOS ONE 16, no. 4 (April 8, 2021): e0249622. http://dx.doi.org/10.1371/journal.pone.0249622.

Abstract:
Latent knowledge can be extracted from the electronic notes that are recorded during patient encounters with the health system. Using these clinical notes to decipher a patient's underlying comorbidities, symptom burdens, and treatment courses is an ongoing challenge. A latent topic model, as an efficient Bayesian method, can be used to model each patient's clinical notes as "documents" and the words in the notes as "tokens". However, standard latent topic models assume that all of the notes follow the same topic distribution, regardless of the type of note or the domain expertise of the author (such as doctors or nurses). We propose a novel application of latent topic modeling, using a multi-note topic model (MNTM) to jointly infer distinct topic distributions of notes of different types. We applied our model to clinical notes from the MIMIC-III dataset to infer distinct topic distributions over the physician and nursing note types. Based on manual assessments made by clinicians, we observed a significant improvement in topic interpretability using MNTM modeling over the baseline single-note topic models that ignore the note types. Moreover, our MNTM model led to a significantly higher prediction accuracy for prolonged mechanical ventilation and mortality using only the first 48 hours of patient data. By correlating the patients' topic mixture with hospital mortality and prolonged mechanical ventilation, we identified several diagnostic topics that are associated with poor outcomes. Because of its elegant and intuitive formation, we envision a broad application of our approach in mining multi-modality text-based healthcare information that goes beyond clinical notes. Code available at https://github.com/li-lab-mcgill/heterogeneous_ehr.
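As a crude baseline in the spirit of note-type-specific topics (though without the joint inference that defines MNTM), one can fit a separate LDA per note type with scikit-learn, as sketched below. The two-document corpora are invented stand-ins, not MIMIC-III data, and the topic counts are arbitrary.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

notes = {
    "physician": ["septic shock vasopressors started", "ventilator weaning trial"],
    "nursing":   ["patient resting comfortably", "turned patient skin intact"],
}

for note_type, docs in notes.items():
    vec = CountVectorizer()
    counts = vec.fit_transform(docs)                      # bag-of-words matrix
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    top = lda.components_[0].argsort()[::-1][:3]          # top words of topic 0
    words = vec.get_feature_names_out()
    print(note_type, "topic 0:", [words[i] for i in top])
```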
26

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Abstract:
To date, discrete event stochastic simulations of large scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it has been widely accepted that spatial factor plays a critical role in the dynamics of most biological reaction systems. The NSM (the Next Sub-Volume Method), a spatial variation of the Gillespie’s stochastic simulation algorithm (SSA), has been proposed for spatially stochastic simulation of those systems. While being able to explore high degree of parallelism in systems, NSM is inherently sequential, which still suffers from the problem of low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) TW (time warp) simulator to enable the fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (the Abstract NSM), a parallel variation of the NSM. The simulation results of classical Lotka-Volterra biological reaction system show that our time warp simulator obtains remarkable parallel speed-up against sequential execution of the NSM.I.IntroductionThe goal of Systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with system theory, mathematics and computer science [1][3], since the isolated knowledge of parts can not explain the dynamics of a whole system. As the complement of “wet-lab” experiments, stochastic simulation, being called the “dry-computational” experiment, plays a more and more important role in computing systems biology [2]. Among many methods explored in systems biology, discrete event stochastic simulation is of greatly importance [4][5][6], since a great number of researches have present that stochasticity or “noise” have a crucial effect on the dynamics of small population biological reaction systems [4][7]. Furthermore, recent research shows that the stochasticity is not only important in biological reaction systems with small population but also in some moderate/large population systems [7].To date, Gillespie’s SSA [8] is widely considered to be the most accurate way to capture the dynamics of biological reaction systems instead of traditional mathematical method [5][9]. However, SSA-based stochastic simulation is confronted with two main challenges: Firstly, this type of simulation is extremely time-consuming, since when the types of species and the number of reactions in the biological system are large, SSA requires a huge amount of steps to sample these reactions; Secondly, the assumption that the systems are spatially homogeneous or well-stirred is hardly met in most real biological systems and spatial factors play a key role in the behaviors of most real biological systems [19][20][21][22][23][24]. The next sub-volume method (NSM) [18], presents us an elegant way to access the special problem via domain partition. To our disappointment, sequential stochastic simulation with the NSM is still very time-consuming, and additionally introduced diffusion among neighbor sub-volumes makes things worse. Whereas, the NSM explores a very high degree of parallelism among sub-volumes, and parallelization has been widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. 
Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation would be particularly promising. Although there are a few attempts have been conducted [29][30][31], research in this filed is still in its infancy and many issues are in need of further discussion. The next section of the paper presents the background and related work in this domain. In section III, we give the details of design and implementation of model interfaces of LP paradigm and the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.II. Background and Related WorkA. Parallel Discrete Event Simulation (PDES)The notion Logical Process (LP) is introduced to PDES as the abstract of the physical process [26], where a system consisting of many physical processes is usually modeled by a set of LP. LP is regarded as the smallest unit that can be executed in PDES and each LP holds a sub-partition of the whole system’s state variables as its private ones. When a LP processes an event, it can only modify the state variables of its own. If one LP needs to modify one of its neighbors’ state variables, it has to schedule an event to the target neighbor. That is to say event message exchanging is the only way that LPs interact with each other. Because of the data dependences or interactions among LPs, synchronization protocols have to be introduced to PDES to guarantee the so-called local causality constraint (LCC) [26]. By now, there are a larger number of synchronization algorithms have been proposed, e.g. the null-message [26], the time warp (TW) [32], breath time warp (BTW) [33] and etc. According to whether can events of LPs be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. However, Dematté and Mazza have theoretically pointed out the disadvantages of pure conservative parallel simulation for biochemical reaction systems [31]. B. NSM and ANSM The NSM is a spatial variation of Gillespie’ SSA, which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM presents us a pretty good way to tackle the aspect of space in biological systems by partitioning a spatially inhomogeneous system into many much more smaller “homogeneous” ones, which can be simulated by SSA separately. However, the NSM is inherently combined with the sequential semantics, and all sub-volumes share one common data structure for events or messages. Thus, directly parallelization of the NSM may be confronted with the so-called boundary problem and high costs of synchronously accessing the common data structure [29]. In order to obtain higher efficiency of parallel simulation, parallelization of NSM has to firstly free the NSM from the sequential semantics and secondly partition the shared data structure into many “parallel” ones. One of these is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled by a logical process (LP) based on the LP paradigm of PDES, where each LP held its own event queue and state variables (see Fig. 1). In addition, the so-called retraction mechanism was introduced in the ANSM too (see algorithm 1). Besides, based on the ANSM, Wang etc. [30] have experimentally tested the performance of several PDES algorithms in the platform called YH-SUPE [27]. 
However, their platform is designed for general simulation applications, thus it would sacrifice some performance for being not able to take into account the characteristics of biological reaction systems. Using the similar ideas of the ANSM, Dematté and Mazza have designed and realized an optimistic simulator. However, they processed events in time-stepped manner, which would lose a specific degree of precisions compared with the discrete event manner, and it is very hard to transfer a time-stepped simulation to a discrete event one. In addition, Jeschke etc.[29] have designed and implemented a dynamic time-window simulator to execution the NSM in parallel on the grid computing environment, however, they paid main attention on the analysis of communication costs and determining a better size of the time-window.Fig. 1: the variations from SSA to NSM and from NSM to ANSMC. JAMES II JAMES II is an open source discrete event simulation experiment framework developed by the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on the plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML-files [13]. Combined with the factory method pattern JAMES II innovatively split up the model and simulator, which makes JAMES II is very flexible to add and reuse both of models and simulators. In addition, JAMES II supports various types of modelling formalisms, e.g. cellular automata, discrete event system specification (DEVS), SpacePi, StochasticPi and etc.[14]. Besides, a well-defined simulator selection mechanism is designed and developed in JAMES II, which can not only automatically choose the proper simulators according to the modeling formalism but also pick out a specific simulator from a serious of simulators supporting the same modeling formalism according to the user settings [15].III. The Model Interface and SimulatorAs we have mentioned in section II (part C), model and simulator are split up into two separate parts. Thus, in this section, we introduce the designation and implementation of model interface of LP paradigm and more importantly the time warp simulator.A. The Mod Interface of LP ParadigmJAMES II provides abstract model interfaces for different modeling formalism, based on which Wang etc. have designed and implemented model interface of LP paradigm[16]. However, this interface is not scalable well for parallel and distributed simulation of larger scale systems. In our implementation, we accommodate the interface to the situation of parallel and distributed situations. Firstly, the neighbor LP’s reference is replaced by its name in LP’s neighbor queue, because it is improper even dangerous that a local LP hold the references of other LPs in remote memory space. In addition, (pseudo-)random number plays a crucial role to obtain valid and meaningful results in stochastic simulations. However, it is still a very challenge work to find a good random number generator (RNG) [34]. Thus, in order to focus on our problems, we introduce one of the uniform RNGs of JAMES II to this model interface, where each LP holds a private RNG so that random number streams of different LPs can be independent stochastically. B. The Time Warp SimulatorBased on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which contains the (master-)simulator, (LP-)simulator. 
B. The Time Warp Simulator

Based on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which consists of a (master-)simulator and (LP-)simulators. The simulator works strictly in a master/worker(s) paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master- and LP-)simulators, where each simulator holds proxies of the target simulators that work on remote workers. One advantage of this communication approach is that the PDES code can be transferred to various hardware environments, such as clusters, grids and other distributed computing environments, with only small modifications; another is that the RMI mechanism is easy to realize and independent of any non-Java libraries.

Because of the straggler event problem, states have to be saved in order to roll back events that were processed optimistically. Each time it is modified, the state is cloned onto a queue via the Java clone mechanism. The drawback of this copy state saving approach is that it consumes a lot of memory; however, this can be compensated for by a suitable GVT calculation mechanism. The GVT reduction scheme also has a significant impact on the performance of a parallel simulator, since the GVT marks the upper time boundary of events that can be committed, so that the memory of fossils (processed events and states) with timestamps below the GVT can be reclaimed. GVT calculation is notoriously tricky because of the simultaneous reporting problem and the transient message problem. For our problem, another GVT algorithm, called Twice Notification (TN-GVT) (see Algorithm 2), is contributed to this already rich repository instead of implementing one of the GVT algorithms of references [26] and [28]. This algorithm resembles the synchronous algorithm described in reference [26] (p. 114), but the two are essentially different: our algorithm never stops the simulators from processing events during GVT reduction, whereas the algorithm in reference [26] blocks all simulators while the GVT is calculated. As for the transient message problem, it can be neglected in our implementation, because the RMI-based remote communication is synchronous, meaning that a simulator does not continue its processing until the remote message has reached its destination. For the same reason, the high-cost message acknowledgements prevalent in many classical asynchronous GVT algorithms are no longer needed, which should benefit the overall performance of the time warp simulator.
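The following hedged Java sketch illustrates the copy state saving, rollback and GVT-driven fossil collection described above. The classes are simplified stand-ins for illustration, not the actual simulator code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// State of one LP; cloned before every modification (copy state saving).
class LpState implements Cloneable {
    double timestamp;
    int population;
    @Override public LpState clone() {
        try { return (LpState) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }
}

class StateLog {
    // Saved copies in increasing timestamp order.
    private final Deque<LpState> copies = new ArrayDeque<>();

    void saveBeforeModify(LpState current) { copies.addLast(current.clone()); }

    // A straggler forces a rollback to the newest copy strictly older than it.
    LpState rollback(double stragglerTime) {
        while (!copies.isEmpty() && copies.peekLast().timestamp >= stragglerTime) {
            copies.removeLast();   // discard optimistically saved copies
        }
        if (copies.isEmpty()) throw new IllegalStateException("rolled back past GVT");
        return copies.peekLast().clone();
    }

    // Fossil collection: since no straggler can arrive below GVT, only the
    // newest copy older than GVT must be kept; everything before it is freed.
    void fossilCollect(double gvt) {
        while (copies.size() > 1) {
            LpState first = copies.removeFirst();
            if (copies.peekFirst().timestamp >= gvt) { copies.addFirst(first); break; }
        }
    }
}
```

This also shows why the GVT reduction scheme matters for memory: the more often a (correct) GVT is computed, the earlier fossilCollect can release old copies.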
IV. Benchmark Model and Experiment Results

A. The Lotka-Volterra Predator-Prey System

In our experiment, the spatial version of the Lotka-Volterra predator-prey system is used as the benchmark model (see Fig. 2). We chose this system for two reasons: 1) it is a classical experimental model that has been used in many related studies [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple but rich enough to test the issues we are interested in. The space of the predator-prey system is partitioned into a 2D N×N grid, where N denotes the edge size of the grid. Initially, the populations of Grass, Prey and Predators are set to 1000 in each sub-volume (LP). In Fig. 2, r1, r2 and r3 stand for the reaction constants of reactions 1, 2 and 3, respectively, and dGrass, dPrey and dPredator stand for the diffusion rates of Grass, Prey and Predator, respectively. Similar to reference [8], we assume that the population of the grass remains stable, and dGrass is therefore set to zero.

R1: Grass + Prey -> 2 Prey (1)
R2: Predator + Prey -> 2 Predator (2)
R3: Predator -> NULL (3)
r1 = 0.01; r2 = 0.01; r3 = 10 (4)
dGrass = 0.0; dPrey = 2.5; dPredator = 5.0 (5)

Fig. 2: The predator-prey system.

B. Experiment Results

The simulation runs were executed on a Linux cluster with 40 computing nodes. Each computing node is equipped with two 64-bit 2.53 GHz Intel Xeon quad-core processors and 24 GB RAM, and the nodes are interconnected via Gigabit Ethernet. The operating system is Kylin Server 3.5 with kernel 2.6.18. Experiments were conducted on benchmark models of different sizes to investigate the execution time and speedup of the time warp simulator. As shown in Fig. 3, the execution times of simulations on a single processor with 8 cores are compared. The results show that simulating larger systems for the same simulated time takes more wall clock time, which confirms that larger systems lead to more events in the same time interval. More importantly, the blue line shows that sequential simulation performance declines very quickly as the model scale grows; the bottleneck of the sequential simulator is the cost of accessing a long event queue to choose the next event. Furthermore, the comparison between group 1 and group 2 in this experiment shows that a high diffusion rate greatly increases the simulation time in both sequential and parallel simulations. This is because the LP paradigm has to split diffusion into two events (a diffusion (in) and a diffusion (out) event) for the two LPs involved, and a high diffusion rate leads to a high proportion of diffusion events relative to reaction events.

In the second step, shown in Fig. 4, the relationship between the speedup of time warp for two different model sizes and the number of worker cores is demonstrated. The speedup is calculated against the sequential execution of the spatial reaction-diffusion model with the same model size and parameters using the NSM. Fig. 4 compares the speedup of time warp on a 64×64 grid and on a 100×100 grid. In the case of the 64×64 grid, when only one node is used, the lowest speedup (slightly above 1) is achieved with two cores, and the highest speedup (about 6) with 8 cores. The influence of the number of cores used in parallel simulation is also investigated: in most cases, a larger number of cores brings considerable improvements in parallel simulation performance. Moreover, comparing the two results in Fig. 4, the simulation of the larger model achieves better speedup. Combined with the timing tests (Fig. 3), we find that the sequential simulator's performance declines sharply when the model becomes very large, which correspondingly lets the time warp simulator obtain better speedup.

Fig. 3: Execution time (wall clock time) of sequential and time warp simulation with respect to different model sizes (N = 32, 64, 100, and 128) and model parameters, based on a single computing node with 8 cores. Results are grouped by diffusion rates (Group 1: Sequential 1 and Time Warp 1, dPrey = 2.5, dPredator = 5.0; Group 2: Sequential 2 and Time Warp 2, dPrey = 0.25, dPredator = 0.5).

Fig. 4: Speedup of time warp with respect to the number of worker cores and the model size (N = 64 and 100). Worker cores are chosen from one computing node. Diffusion rates are dPrey = 2.5, dPredator = 5.0 and dGrass = 0.0.
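As a concrete illustration of the benchmark kinetics, here is a hedged Java sketch of one Gillespie direct-method step for reactions R1-R3 inside a single sub-volume, using the rate constants given above. Diffusion events between sub-volumes and the surrounding NSM machinery are omitted for brevity; class and method names are illustrative.

```java
import java.util.Random;

// One sub-volume of the predator-prey benchmark, advanced by the SSA
// direct method: draw an exponential waiting time from the total propensity,
// then pick a reaction with probability proportional to its propensity.
class SubVolumeSsa {
    int grass = 1000, prey = 1000, predator = 1000;   // initial populations per LP
    final double r1 = 0.01, r2 = 0.01, r3 = 10.0;     // reaction constants (4)
    final Random rng = new Random(42);
    double time = 0.0;

    void step() {
        double a1 = r1 * grass * prey;      // R1: Grass + Prey -> 2 Prey
        double a2 = r2 * predator * prey;   // R2: Predator + Prey -> 2 Predator
        double a3 = r3 * predator;          // R3: Predator -> NULL
        double a0 = a1 + a2 + a3;
        if (a0 == 0) return;                // no reaction possible
        time += -Math.log(rng.nextDouble()) / a0;   // exponential waiting time
        double u = rng.nextDouble() * a0;
        if (u < a1)           { prey++; }                 // grass kept stable (dGrass = 0)
        else if (u < a1 + a2) { prey--; predator++; }
        else                  { predator--; }
    }
}
```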
V. Conclusion and Future Work

In this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, rollback and, especially, GVT reduction in the parallel execution of simulations. The Lotka-Volterra predator-prey system is chosen as the benchmark model to test the performance of our time warp simulator, and the best experimental results show a speedup of about 6 over sequential simulation. The domain this paper is concerned with is in its infancy, and many interesting issues are worth further investigation; for example, there are many other excellent optimistic PDES synchronization algorithms (e.g. the BTW), and as a next step we would like to integrate some of them into JAMES II. In addition, Gillespie approximation methods (tau-leap [10], etc.) sacrifice some precision for higher simulation speed but still do not address the spatial aspect of biological reaction systems. Combining the spatial element with approximation methods would be very interesting and promising; however, the parallel execution of tau-leap methods will have to overcome many obstacles on the road ahead.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (NSF) Grant (No. 60773019) and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 200899980004). The authors would like to express their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany, for their invaluable advice and kind help with JAMES II.

References

[1] H. Kitano, "Computational systems biology," Nature, vol. 420, no. 6912, pp. 206-210, November 2002.
[2] H. Kitano, "Systems biology: a brief overview," Science, vol. 295, no. 5560, pp. 1662-1664, March 2002.
[3] A. Aderem, "Systems biology: Its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. [Online]. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020.
[4] H. de Jong, "Modeling and simulation of genetic regulatory systems: A literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.
[5] C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.
[6] D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5016, ch. 5, pp. 125-167.
[7] Y. Tao, Y. Jia, and G. T. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.
[8] D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.
[9] D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.
[10] D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.
[11] J. Himmelspach, R. Ewald, and A. M. Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation, 2008, pp. 827-835.
[12] J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS'07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.
[13] R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework JAMES II - implications for computational systems biology," Brief Bioinform, vol. 11, no. 3, pp. bbp067-300, January 2010.
[14] A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! A perspective for computational biology based on JAMES II," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer, 2008, vol. 5054, ch. 9, pp. 123-138. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9.
[15] R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," in Proceedings of PADS '08, 2008, pp. 91-98.
[16] B. Wang, J. Himmelspach, R. Ewald, Y. Yao, and A. M. Uhrmacher, "Experimental analysis of logical process simulation algorithms in JAMES II," in Proceedings of the Winter Simulation Conference, M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, Eds. IEEE, 2009, pp. 1167-1179.
[17] R. Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation, 2008, pp. 836-844.
[18] J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases," Systems Biology, vol. 1, no. 2, pp. 230-236, December 2004.
[19] K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways? Towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.
[20] J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (PTS) pathway in Escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.
[21] D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.
[22] J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (PTS) pathway in Escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.
[23] W. G. Wilson, A. M. Deroos, and E. Mccauley, "Spatial instabilities within the diffusive Lotka-Volterra system: Individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.
[24] K. Kruse and J. Elf, "Kinetics in spatially extended systems," in System Modeling in Cellular Biology: From Concepts to Nuts and Bolts, Z. Szallasi, J. Stelling, and V. Periwal, Eds. Cambridge, MA: MIT Press, 2006, pp. 177-198.
[25] M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.
[26] R. M. Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.
[27] Y. Yao and Y. Zhang, "Solution for analytic simulation based on parallel processing," Journal of System Simulation, vol. 20, no. 24, pp. 6617-6621, 2008.
[28] G. Chen and B. K. Szymanski, "DSIM: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th Conference on Winter Simulation, 2005, pp. 346-355.
[29] M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.
[30] B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," in International Workshop on High Performance Computational Systems Biology, October 2009, pp. 91-100.
[31] L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5307, ch. 16, pp. 191-210.
[32] D. R. Jefferson, "Virtual time," ACM Transactions on Programming Languages and Systems, vol. 7, no. 3, pp. 404-425, July 1985.
[33] J. S. Steinman, "Breathing time warp," SIGSIM Simulation Digest, vol. 23, no. 1, pp. 109-118, July 1993. [Online]. Available: http://dx.doi.org/10.1145/174134.158473.
[34] S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Communications of the ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
APA, Harvard, Vancouver, ISO, and other styles
27

Hong, Jeongkwan, Minho Won, and Hyunju Ro. "The Molecular and Pathophysiological Functions of Members of the LNX/PDZRN E3 Ubiquitin Ligase Family." Molecules 25, no. 24 (December 15, 2020): 5938. http://dx.doi.org/10.3390/molecules25245938.

Full text
Abstract:
The ligand of Numb protein-X (LNX) family, also known as the PDZRN family, is composed of four discrete RING-type E3 ubiquitin ligases (LNX1, LNX2, LNX3, and LNX4), plus LNX5, which may not act as an E3 ubiquitin ligase owing to its lack of the RING domain. As the name implies, LNX1 and LNX2 were initially studied for exerting E3 ubiquitin ligase activity on their substrate Numb protein, whose stability was negatively regulated by LNX1 and LNX2 via the ubiquitin-proteasome pathway. LNX proteins may have versatile molecular, cellular, and developmental functions, considering that, apart from these proteins, no other E3 ubiquitin ligase has multiple PDZ (PSD95, DLGA, ZO-1) domains, which are regarded as important protein-interaction modules. Thus far, various proteins have been isolated as LNX-interacting proteins. Evidence from studies performed over the last two decades has suggested that members of the LNX family play various pathophysiological roles, primarily by modulating the function of substrate proteins involved in several different intracellular or intercellular signaling cascades. As binding partners of RING-type E3s, a large number of substrates of LNX proteins undergo degradation through ubiquitin-proteasome system (UPS)-dependent or lysosomal pathways, potentially altering key signaling pathways. In this review, we highlight recent and relevant findings on the molecular and cellular functions of the members of the LNX family and discuss the role of the erroneous regulation of these proteins in disease progression.
APA, Harvard, Vancouver, ISO, and other styles
28

Facco Rodrigues, Vinicius, Ivam Guilherme Wendt, Rodrigo da Rosa Righi, Cristiano André da Costa, Jorge Luis Victória Barbosa, and Antonio Marcos Alberti. "Brokel: Towards enabling multi-level cloud elasticity on publish/subscribe brokers." International Journal of Distributed Sensor Networks 13, no. 8 (August 2017): 155014771772886. http://dx.doi.org/10.1177/1550147717728863.

Full text
Abstract:
Internet of Things networks, together with the data that flow between networked smart devices, are growing at unprecedented rates. Often brokers, or intermediary nodes, combined with the publish/subscribe communication model, represent one of the most widely used strategies to enable Internet of Things applications. From a scalability viewpoint, cloud computing and its main feature, resource elasticity, appear as an alternative to over-provisioned clusters, which normally present a fixed number of resources. However, we perceive that the elasticity and Pub/Sub combination today presents several limitations, mainly related to application rewriting, single-cloud elasticity limited to one level, and false-positive resource reorganization actions. Aiming to bypass the aforesaid problems, this article proposes Brokel, a multi-level elasticity model for Pub/Sub brokers. Users, things, and applications use Brokel as a centralized messaging service broker, but in the back-end the middleware provides better performance and cost (used resources × performance) on message delivery using virtual machine (VM) replication. Our scientific contributions are the multi-level elasticity, the orchestrator, and the broker, along with the addition of a geolocation domain name system service to define the most suitable entry point into the Pub/Sub architecture. Different execution scenarios and metrics were employed to evaluate a Brokel prototype using VMs that encapsulate the functionalities of the Mosquitto and RabbitMQ brokers. The obtained results were encouraging in terms of application time, message throughput, and cost (application time × resource usage) when comparing elastic and non-elastic executions.
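To illustrate the kind of decision logic a multi-level elasticity model implies, here is a minimal, hedged Java sketch of a threshold-based VM replication check. The thresholds, class and method names are assumptions for illustration only and are not taken from the Brokel paper.

```java
// Toy elasticity controller: monitor average broker load and decide whether
// to replicate (scale out) or consolidate (scale in) broker VMs.
class ElasticityController {
    static final double SCALE_OUT_LOAD = 0.80;  // replicate a broker VM above this load
    static final double SCALE_IN_LOAD  = 0.30;  // consolidate below this load

    enum Action { SCALE_OUT, SCALE_IN, NONE }

    Action decide(double avgCpuLoad, int brokerVms) {
        if (avgCpuLoad > SCALE_OUT_LOAD) return Action.SCALE_OUT;
        if (avgCpuLoad < SCALE_IN_LOAD && brokerVms > 1) return Action.SCALE_IN;
        return Action.NONE;  // within the comfort band: no reorganization
    }
}
```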
APA, Harvard, Vancouver, ISO, and other styles
29

Klots, Y. P., I. V. Muliar, V. M. Cheshun, and O. V. Burdyug. "USE OF DISTRIBUTED HASH TABLES TO PROVIDE ACCESS TO CLOUD SERVICES." Collection of scientific works of the Military Institute of Kyiv National Taras Shevchenko University, no. 67 (2020): 85–95. http://dx.doi.org/10.17721/2519-481x/2020/67-09.

Full text
Abstract:
The article discusses the urgent problem of granting access to the services of a distributed cloud system; in particular, a peer-to-peer distributed cloud system is characterized, and the interaction of its main components in accessing a web resource by domain name is described. The distribution of resources between the nodes of a peer-to-peer distributed cloud system, with subsequent provision of services on request, is implemented using the Kademlia protocol over a local network or the Internet, and comprises processes for publishing a resource by its owner at the initial stage, replication, and directly providing access to resources. The application of modern adaptive information security technologies does not allow full control over the information flows of the cloud computing environment, since such technologies function at the upper levels of the hierarchy. Therefore, to create effective mechanisms for protecting software in a cloud computing environment, it is necessary to develop new threat models and to create methods for representing computer attacks that make it possible to quickly identify hidden and potentially dangerous processes of information interaction. Access rules form the basis of the security policy and include restrictions on the mechanisms by which access processes are initialized. Under the developed operations model, the formalized description of hidden threats reduces to the emergence of context-dependent transitions in the transaction multigraph. The method of granting access to the services of the distributed cloud system is substantiated. The Distributed Hash Table (DHT) infrastructure is used to find a replication node that holds a replica of the requested resource or a part of it. The study identifies the stages of node validation. The processes of adding a new node, validating its authenticity, publishing a resource, and accessing a resource are described as step-by-step sequences of actions within the method of granting access to the services of a distributed cloud system, using a graphical description of information flows and of the interaction between information processing processes and objects.
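As background to the abstract above: in a Kademlia-style DHT, nodes and resources share one identifier space, and the replication node "closest" to a resource key under the XOR metric is the one queried for a replica. The following Java sketch shows the metric itself; the names and the SHA-1 key derivation are illustrative assumptions, not taken from the paper.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class KademliaDistance {
    // Hash a node or resource name into the shared 160-bit key space.
    static BigInteger key(String name) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            return new BigInteger(1, sha1.digest(name.getBytes(StandardCharsets.UTF_8)));
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-1 is available on every JVM
        }
    }

    // XOR metric: a smaller value means "closer" in the DHT key space.
    static BigInteger distance(BigInteger a, BigInteger b) { return a.xor(b); }

    public static void main(String[] args) {
        BigInteger resource = key("example.org/resource");
        BigInteger nodeA = key("node-A"), nodeB = key("node-B");
        String closest = distance(nodeA, resource).compareTo(distance(nodeB, resource)) < 0
                ? "node-A" : "node-B";
        System.out.println("ask " + closest + " for the replica");
    }
}
```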
APA, Harvard, Vancouver, ISO, and other styles
30

Dos Santos, Diego Pinto, Candice Müller, Fabio D'Agostini, Maria Cristina F. De Castro, and Fernando C. De Castro. "Symbol Synchronization for OFDM Receivers with FFT Transport Delay Compensation." Journal of Circuits, Systems and Computers 24, no. 05 (April 8, 2015): 1550076. http://dx.doi.org/10.1142/s0218126615500760.

Full text
Abstract:
This paper proposes a new blind approach for the time synchronization of orthogonal frequency division multiplexing (OFDM) receivers (RX). It is well known that the OFDM technique has been successfully applied to a wide variety of digital communications systems over the past several years — IEEE 802.16 WiMax, 3GPP-LTE, IEEE 802.22, DVB-T/H and ISDB-T, to name a few. We focus on synchronization for the ISDB-T digital television system, currently adopted by several South American countries. The proposed approach uses coarse synchronization to estimate the initial time reference; fine synchronization then keeps tracking the transmitter (TX) time reference. The innovation of the proposed approach lies in the closed-loop control stabilization of the fine synchronization, which uses a Smith predictor and a differential estimator that estimates the difference between the TX and RX clock frequencies. The proposed method allows the RX to track the TX time reference with high precision ([Formula: see text] sample fraction). Thus, the carrier phase rotation caused by an incorrect time reference is minimized and does not impair the demodulation of the RX IQ symbols. The RX internal time reference is adjusted based on pilot symbols, called scattered pilots (SPs) in the context of the ISDB-T standard, which are inserted in the frequency domain at the inverse fast Fourier transform (IFFT) input in the TX. The averaged progressive phase rotation of the received SPs at the fast Fourier transform (FFT) output is used to compute the time misalignment, which is then used to adjust the RX fine time synchronism. Furthermore, the proposed method has been implemented in an ISDB-T RX, and the FPGA-based receiver has been evaluated over several multipath, Doppler and AWGN channel models.
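For readers unfamiliar with the pilot-based timing estimate mentioned in the abstract: a residual timing offset of d samples rotates carrier k at the FFT output by a phase of 2*pi*k*d/N, so the average phase increment between scattered pilots yields an estimate of d. The Java sketch below illustrates this relation under assumed ISDB-T Mode 3 parameters; it is not the paper's implementation, which additionally stabilizes the loop with a Smith predictor and a differential clock estimator.

```java
// Estimate a residual timing offset (in samples) from the phases of the
// scattered pilots measured at the FFT output of one OFDM symbol.
class PilotTimingEstimator {
    static final int FFT_SIZE = 8192;      // assumed ISDB-T Mode 3 FFT length
    static final int PILOT_SPACING = 12;   // assumed scattered-pilot carrier spacing

    // pilotPhases[i] = measured phase (radians) of the i-th scattered pilot
    static double estimateOffsetSamples(double[] pilotPhases) {
        double sum = 0;
        for (int i = 1; i < pilotPhases.length; i++) {
            // wrap each phase increment into (-pi, pi] before averaging
            double d = pilotPhases[i] - pilotPhases[i - 1];
            sum += Math.atan2(Math.sin(d), Math.cos(d));
        }
        double avgIncrement = sum / (pilotPhases.length - 1); // per PILOT_SPACING carriers
        double slopePerCarrier = avgIncrement / PILOT_SPACING;
        return slopePerCarrier * FFT_SIZE / (2 * Math.PI);    // offset d in samples
    }
}
```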
APA, Harvard, Vancouver, ISO, and other styles
31

Thomason, Larry W., Nicholas Ernest, Luis Millán, Landon Rieger, Adam Bourassa, Jean-Paul Vernier, Gloria Manney, Beiping Luo, Florian Arfeuille, and Thomas Peter. "A global space-based stratospheric aerosol climatology: 1979–2016." Earth System Science Data 10, no. 1 (March 12, 2018): 469–92. http://dx.doi.org/10.5194/essd-10-469-2018.

Full text
Abstract:
Abstract. We describe the construction of a continuous 38-year record of stratospheric aerosol optical properties. The Global Space-based Stratospheric Aerosol Climatology, or GloSSAC, provided the input data to the construction of the Climate Model Intercomparison Project stratospheric aerosol forcing data set (1979–2014), and we have extended it through 2016 following an identical process. GloSSAC focuses on the Stratospheric Aerosol and Gas Experiment (SAGE) series of instruments through mid-2005, and on the Optical Spectrograph and InfraRed Imager System (OSIRIS) and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data thereafter. We also use data from other space instruments and from ground-based, air- and balloon-borne instruments to fill in key gaps in the data set. The end result is a global and gap-free data set focused on aerosol extinction coefficient at 525 and 1020 nm, with other parameters included on an "as available" basis. For the primary data sets, we developed a new method for filling the post-Pinatubo-eruption data gap of 1991–1993 based on data from the Cryogenic Limb Array Etalon Spectrometer. In addition, we developed a new method for populating wintertime high latitudes during the SAGE period employing a latitude-equivalent latitude conversion process that greatly improves the depiction of aerosol at high latitudes compared to earlier similar efforts. We report data in the troposphere only when and where they are available. This is primarily during the SAGE II period, except for the most enhanced part of the Pinatubo period. It is likely that the upper troposphere during the Pinatubo period was greatly enhanced over non-volcanic periods, and that domain remains substantially under-characterized. We note that aerosol levels during the OSIRIS/CALIPSO period in the lower stratosphere at mid- and high latitudes are routinely higher than what we observed during the SAGE II period. While this period had nearly continuous low-level volcanic activity, it is possible that the enhancement partly reflects deficiencies in the data set. We also expended substantial effort to quality-assess the data set, and the product is by far the best we have produced. GloSSAC version 1.0 is available in netCDF format at the NASA Atmospheric Data Center at https://eosweb.larc.nasa.gov/. GloSSAC users should cite this paper and the data set DOI (https://doi.org/10.5067/GloSSAC-L3-V1.0).
APA, Harvard, Vancouver, ISO, and other styles
32

Marcon, Yannick, Tom Bishop, Demetris Avraam, Xavier Escriba-Montagut, Patricia Ryser-Welch, Stuart Wheater, Paul Burton, and Juan R. González. "Orchestrating privacy-protected big data analyses of data from different resources with R and DataSHIELD." PLOS Computational Biology 17, no. 3 (March 30, 2021): e1008880. http://dx.doi.org/10.1371/journal.pcbi.1008880.

Full text
Abstract:
Combined analysis of multiple, large datasets is a common objective in the health- and biosciences. Existing methods tend to require researchers to physically bring data together in one place or follow an analysis plan and share results. Developed over the last 10 years, the DataSHIELD platform is a collection of R packages that reduce the challenges of these methods. These include ethico-legal constraints which limit researchers’ ability to physically bring data together and the analytical inflexibility associated with conventional approaches to sharing results. The key feature of DataSHIELD is that data from research studies stay on a server at each of the institutions that are responsible for the data. Each institution has control over who can access their data. The platform allows an analyst to pass commands to each server and the analyst receives results that do not disclose the individual-level data of any study participants. DataSHIELD uses Opal which is a data integration system used by epidemiological studies and developed by the OBiBa open source project in the domain of bioinformatics. However, until now the analysis of big data with DataSHIELD has been limited by the storage formats available in Opal and the analysis capabilities available in the DataSHIELD R packages. We present a new architecture (“resources”) for DataSHIELD and Opal to allow large, complex datasets to be used at their original location, in their original format and with external computing facilities. We provide some real big data analysis examples in genomics and geospatial projects. For genomic data analyses, we also illustrate how to extend the resources concept to address specific big data infrastructures such as GA4GH or EGA, and make use of shell commands. Our new infrastructure will help researchers to perform data analyses in a privacy-protected way from existing data sharing initiatives or projects. To help researchers use this framework, we describe selected packages and present an online book (https://isglobal-brge.github.io/resource_bookdown).
APA, Harvard, Vancouver, ISO, and other styles
33

Gornostaeva, Yuliya A. "Metaphorical Models of Discourse Designing of the Negative Image of Russia in Spanish Media." Vestnik of Northern (Arctic) Federal University. Series Humanitarian and Social Sciences, no. 6 (December 15, 2020): 47–53. http://dx.doi.org/10.37482/2687-1505-v062.

Full text
Abstract:
This article deals with the problem of discourse designing of the negative image of Russia in Spanish mass media using metaphorical models with the target domain “Russia”. Within the framework of this study, a metaphorical model is defined as a mental scheme of connections between different conceptual domains, which is formed and fixed in the minds of native speakers. The material includes Spanish articles from reputable sources (El País and BBC News in Spanish) mentioning Russia (over 500,000 characters in total). With the use of discourse, corpus and lexical-semantic analysis as well as conceptual analysis of metaphorical models, the author found that the negative image of Russia is constructed in Spanish political media discourse through the following metaphorical models: “Russia is the country of Putin”, “Russia is the heir to the USSR” and “Russia is a pseudo-saviour of the world”. The following frames can be distinguished within the structure of the model “Russia is the country of Putin”: 1) Russia belongs to Putin; 2) Putin is a lifelong president; 3) Putinism is an ideology of Russian citizens. The metaphorical model “Russia is the heir to the USSR” includes the following stereotypical scenarios: 1) authoritarian, undemocratic methods of governance; 2) “Soviet” way of living; 3) backward economy. The frame that exists within the model “Russia is a pseudo-saviour of the world” actualizes the scenario of providing humanitarian aid for show. Among the lexical and grammatical representatives of these metaphorical models are the following: prepositional construction with de and the proper name Putin; adjectives with the semes “infinitely long, eternal” and “entire”; adjectives containing semes of distance/separateness; lexeme soviético, etc. The nominative field of the considered image contains both direct nominations, represented by the toponyms Rusia, Moscú and Kremlin, and metaphors that are mainly related to the size of the state. The stereotypical features of the source domain make it possible to create a negative image of Russia as an authoritarian, undemocratic state with an almost monarchical system, outdated Soviet governance methods, saviour-of-the-world ambitions, and ostentatious behaviour in the international arena.
APA, Harvard, Vancouver, ISO, and other styles
34

Humphreys, David, A. Kupresanin, M. D. Boyer, J. Canik, C. S. Chang, E. C. Cyr, R. Granetz, et al. "Advancing Fusion with Machine Learning Research Needs Workshop Report." Journal of Fusion Energy 39, no. 4 (August 2020): 123–55. http://dx.doi.org/10.1007/s10894-020-00258-1.

Full text
Abstract:
Abstract Machine learning and artificial intelligence (ML/AI) methods have been used successfully in recent years to solve problems in many areas, including image recognition, unsupervised and supervised classification, game-playing, system identification and prediction, and autonomous vehicle control. Data-driven machine learning methods have also been applied to fusion energy research for over 2 decades, including significant advances in the areas of disruption prediction, surrogate model generation, and experimental planning. The advent of powerful and dedicated computers specialized for large-scale parallel computation, as well as advances in statistical inference algorithms, have greatly enhanced the capabilities of these computational approaches to extract scientific knowledge and bridge gaps between theoretical models and practical implementations. Large-scale commercial success of various ML/AI applications in recent years, including robotics, industrial processes, online image recognition, financial system prediction, and autonomous vehicles, have further demonstrated the potential for data-driven methods to produce dramatic transformations in many fields. These advances, along with the urgency of need to bridge key gaps in knowledge for design and operation of reactors such as ITER, have driven planned expansion of efforts in ML/AI within the US government and around the world. The Department of Energy (DOE) Office of Science programs in Fusion Energy Sciences (FES) and Advanced Scientific Computing Research (ASCR) have organized several activities to identify best strategies and approaches for applying ML/AI methods to fusion energy research. This paper describes the results of a joint FES/ASCR DOE-sponsored Research Needs Workshop on Advancing Fusion with Machine Learning, held April 30–May 2, 2019, in Gaithersburg, MD (full report available at https://science.osti.gov/-/media/fes/pdf/workshop-reports/FES_ASCR_Machine_Learning_Report.pdf). The workshop drew on broad representation from both FES and ASCR scientific communities, and identified seven Priority Research Opportunities (PRO’s) with high potential for advancing fusion energy. In addition to the PRO topics themselves, the workshop identified research guidelines to maximize the effectiveness of ML/AI methods in fusion energy science, which include focusing on uncertainty quantification, methods for quantifying regions of validity of models and algorithms, and applying highly integrated teams of ML/AI mathematicians, computer scientists, and fusion energy scientists with domain expertise in the relevant areas.
APA, Harvard, Vancouver, ISO, and other styles
35

Jardine, Meg, Zien Zhou, Hiddo J. Lambers Heerspink, Carinna Hockham, Qiang Li, Rajiv Agarwal, George L. Bakris, et al. "Kidney, Cardiovascular, and Safety Outcomes of Canagliflozin according to Baseline Albuminuria." Clinical Journal of the American Society of Nephrology 16, no. 3 (February 22, 2021): 384–95. http://dx.doi.org/10.2215/cjn.15260920.

Full text
Abstract:
Background and objectives: The kidney protective effects of renin-angiotensin system inhibitors are greater in people with higher levels of albuminuria at treatment initiation. Whether this applies to sodium-glucose cotransporter 2 (SGLT2) inhibitors is uncertain, particularly in patients with a very high urine albumin-to-creatinine ratio (UACR; ≥3000 mg/g). We examined the association between baseline UACR and the effects of the SGLT2 inhibitor canagliflozin on efficacy and safety outcomes in the Canagliflozin and Renal Endpoints in Diabetes with Established Nephropathy Clinical Evaluation (CREDENCE) randomized controlled trial.

Design, setting, participants, & measurements: The study enrolled 4401 participants with type 2 diabetes, an eGFR of 30 to <90 ml/min per 1.73 m2, and UACR of >300 to 5000 mg/g. Using Cox proportional hazards regression, we examined the relative and absolute effects of canagliflozin on kidney, cardiovascular, and safety outcomes according to a baseline UACR of ≤1000 mg/g (n=2348), >1000 to <3000 mg/g (n=1547), and ≥3000 mg/g (n=506). In addition, we examined the effects of canagliflozin on UACR itself, eGFR slope, and the intermediate outcomes of glycated hemoglobin, body weight, and systolic BP.

Results: Overall, higher UACR was associated with higher rates of kidney and cardiovascular events. Canagliflozin reduced efficacy outcomes for all UACR levels, with no evidence that relative benefits varied between levels. For example, canagliflozin reduced the primary composite outcome by 24% (hazard ratio [HR], 0.76; 95% confidence interval [95% CI], 0.56 to 1.04) in the lowest UACR subgroup, 28% (HR, 0.72; 95% CI, 0.56 to 0.93) in the UACR subgroup >1000 to <3000 mg/g, and 37% (HR, 0.63; 95% CI, 0.47 to 0.84) in the highest subgroup (Pheterogeneity=0.55). Absolute risk reductions for kidney outcomes were greater in participants with higher baseline albuminuria; the numbers of primary composite events prevented across ascending UACR categories were 17 (95% CI, 3 to 38), 45 (95% CI, 9 to 81), and 119 (95% CI, 35 to 202) per 1000 treated participants over 2.6 years (Pheterogeneity=0.02). Rates of kidney-related adverse events were lower with canagliflozin, with a greater relative reduction in higher UACR categories.

Conclusions: Canagliflozin safely reduces kidney and cardiovascular events in people with type 2 diabetes and severely increased albuminuria. In this population, the relative kidney benefits were consistent over a range of albuminuria levels, with the greatest absolute kidney benefit in those with a UACR ≥3000 mg/g.

Clinical trial registry name and registration number: ClinicalTrials.gov: CREDENCE, NCT02065791.

Podcast: This article contains a podcast at https://www.asn-online.org/media/podcast/CJASN/2021_02_22_CJN15260920_final.mp3
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Z., G. Jones, D. Aitken, S. Balogun, Z. Zhou, L. Blizzard, F. Cicuttini, and B. Antony. "POS0280 ASSOCIATION OF COMPLEMENTARY AND ALTERNATIVE MEDICINE USE WITH KNEE SYMPTOMS AND KNEE STRUCTURAL CHANGES OVER 2.6 YEARS: A POPULATION-BASED COHORT STUDY OF TASMANIAN OLDER ADULTS." Annals of the Rheumatic Diseases 80, Suppl 1 (May 19, 2021): 365.2–365. http://dx.doi.org/10.1136/annrheumdis-2021-eular.2762.

Full text
Abstract:
Background: There is increasing use of complementary and alternative medicines (CAMs), alone or as adjuvant therapy to conventional palliative medicines [1]. However, there remains clinical uncertainty about the benefit of CAMs in the management of osteoarthritis in the older population.

Objectives: To describe the association of CAM use (alone or in combination with conventional analgesics) with knee symptoms and structural changes amongst a representative sample of Tasmanian older adults.

Methods: A total of 1,099 participants were selected from the Tasmania Older Adult Cohort Study (TASOAC), an ongoing prospective population-based study. Exposure to CAMs and conventional medications was classified into four categories according to the national drug code directory [2]: CAM only, conventional analgesics only, both CAMs and conventional analgesics, and neither CAMs nor conventional analgesics. Knee pain was assessed using the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and a 1.5-T MRI of the right knee was performed at baseline and follow-up (around 2.6 years). Longitudinal associations were assessed using mixed effect linear models.

Results: At baseline, participants' mean age was 63, and 86.5% (n=951) reported any medication use. The prevalence of CAM use was 35.0% and of conventional analgesic use 58.6%. Over follow-up, the analgesics-only group had a significant increase in WOMAC pain, function, and stiffness scores compared to those who took neither CAMs nor conventional analgesics. There was a statistically significant femoral cartilage volume loss across all four groups; no statistically significant difference was found between the group taking both CAMs and analgesics and the reference group, but participants in the CAM-only and analgesics-only groups lost significantly more femoral cartilage volume than the reference group (Table 1).

Table 1. Association of change in clinical knee symptoms and knee structural changes over 2.6 years with the medication groups. Values in the CAMs, Both and Analgesics columns are coefficients (95% confidence intervals) for the change relative to the reference group*.

Outcome | Reference group* (mean change) | CAMs | Both | Analgesics
No. of participants | 327 | 128 | 257 | 387
WOMAC pain (5-50) | -0.95 (-1.42, -0.48) | 0.04 (-0.85, 0.93) | 0.32 (-0.4, 1.04) | 0.78 (0.13, 1.43)
WOMAC function (17-170) | -3.09 (-4.52, -1.67) | 1.02 (-1.7, 3.73) | 1.39 (-0.81, 3.59) | 2.32 (0.33, 4.31)
WOMAC stiffness (2-20) | -0.39 (-0.62, -0.17) | 0.15 (-0.28, 0.58) | 0.35 (0, 0.7) | 0.40 (0.09, 0.72)
Femoral cartilage volume (mL) | -187.98 (-228.79, -147.18) | -113.81 (-192.60, -35.03) | -1.92 (-65.00, 61.17) | -127.19 (-186.31, -68.06)

*Reference group = participants taking neither CAMs nor conventional analgesics.

Conclusion: CAM use, alone or in combination with conventional analgesics, may be associated with slower progression of knee pain. Conclusive evidence on the longitudinal benefits of CAMs in the management of osteoarthritis among older adults warrants more studies.

References:
[1] Steel A, McIntyre E, Harnett J, et al. Complementary medicine use in the Australian population: Results of a nationally-representative cross-sectional survey. Sci Rep 2018;8:17325.
[2] National Center for Health Statistics. Long-term Care Drug Database System: Drugs by NDC Class Code, Drug Code and Name, 2007. Available from: https://www.cdc.gov/nchs/data/nnhsd/DrugsbyNDCClass3.pdf [accessed 23 December 2020].

Note: The data were fitted using mixed effect linear models, constructed by entering baseline medication group, phase, the interaction between medication group and phase, covariates (baseline age, sex, body mass index [BMI], and baseline value of the outcome), the interactions between the covariates and phase, a random intercept, and a random slope on phase (time).

Disclosure of Interests: None declared
APA, Harvard, Vancouver, ISO, and other styles
37

Jara, Antonio J., Miguel A. Zamora, and Antonio Skarmeta. "Glowbal IP: An Adaptive and Transparent IPv6 Integration in the Internet of Things." Mobile Information Systems 8, no. 3 (2012): 177–97. http://dx.doi.org/10.1155/2012/819250.

Full text
Abstract:
The Internet of Things (IoT) requires scalability, extensibility and a transparent integration of multiple technologies in order to reach efficient support for global communications, discovery and look-up, as well as access to services and information. To achieve these goals, it is necessary to enable a homogeneous and seamless machine-to-machine (M2M) communication mechanism allowing global access to devices, sensors and smart objects. In this respect, the proposed answer to these technological requirements is called Glowbal IP, which is based on homogeneous access to devices/sensors offered by IPv6 addressing and the IPv6 core network. Glowbal IP's main advantages with regard to 6LoWPAN/IPv6 are not only that it presents a low overhead and thus reaches higher performance on a regular basis, but also that it determines the session and identifies global access by means of a session layer defined over the application layer. Technologies without native IP support, e.g. IEEE 802.15.4 and Bluetooth Low Energy, are thereby adaptable to IP. This extension towards the IPv6 network opens access to the features and methods of the devices through homogeneous access based on Web services (e.g. RESTful/CoAP). In addition, Glowbal IP offers global interoperability among the different devices, and interoperability with external servers and user applications. All in all, it allows the storage of device-related information in the network through an extension of the Domain Name System (DNS) of the IPv6 core network: the DNS Service Directory (DNS-SD) is added to store information about the sensors, their properties and their functionality. A step forward in network-based information systems is thereby reached, allowing homogeneous discovery of, and access to, the devices of the IoT. Thus, the IoT capabilities are exploited by allowing an easier and more transparent integration of end-user applications with sensors for future evaluations and use cases.
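To illustrate the DNS-SD idea mentioned in the abstract, the sketch below models the kind of SRV/TXT registration a sensor could publish so that its properties and functionality become discoverable through DNS. All names, keys and the rendering are assumptions for illustration, not taken from the Glowbal IP specification.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A simplified DNS-SD service record for a sensor: a service instance name,
// a port, and TXT key/value metadata describing the sensor.
class DnsSdSensorRecord {
    final String instance;   // e.g. "temp-42._coap._udp.example.org."
    final int port;          // e.g. 5683 for CoAP
    final Map<String, String> txt = new LinkedHashMap<>();

    DnsSdSensorRecord(String instance, int port) {
        this.instance = instance;
        this.port = port;
    }

    DnsSdSensorRecord property(String key, String value) { txt.put(key, value); return this; }

    // Render the SRV and TXT records in a zone-file-like form (simplified).
    @Override public String toString() {
        StringBuilder sb = new StringBuilder();
        sb.append(instance).append(" SRV 0 0 ").append(port).append(" host.example.org.\n");
        sb.append(instance).append(" TXT");
        txt.forEach((k, v) -> sb.append(" \"").append(k).append('=').append(v).append('"'));
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(new DnsSdSensorRecord("temp-42._coap._udp.example.org.", 5683)
                .property("type", "temperature").property("unit", "C"));
    }
}
```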
APA, Harvard, Vancouver, ISO, and other styles
38

Palmer, Lance E., Xin Zhou, Clay McLeod, Evadnie Rampersaud, Jeremie H. Estepp, Xing Tang, Jian Wang, et al. "Data Access and Interactive Visualization of Whole Genome Sequence of Sickle Cell Patients within the St. Jude Cloud." Blood 132, Supplement 1 (November 29, 2018): 723. http://dx.doi.org/10.1182/blood-2018-99-116597.

Full text
Abstract:
With the increase in availability of high-depth whole genome sequencing (WGS) data of individuals with sickle cell disease (SCD), easy access to the raw sequencing data remains an issue due to technical and regulatory challenges. A compliant system that can provide facile data access would accelerate scientific discovery of genetic variants associated with clinical phenotypes. Cloud storage and computing provide an ultimate solution to this data access, which we have shown through the St. Jude Cloud (https://stjude.cloud), where over 5000 whole genome sequences for pediatric cancer patients are being shared in collaboration with DNANexus and Microsoft. Here we expand the St. Jude Cloud to sickle cell disease data through the Sickle Genome Project (SGP) Data Portal (https://pecan.stjude.org/permalink/sgp) to allow instantaneous raw data access (following data access committee approval), as well as visualization of genotype calls at the individual level in a novel genome browser. The SGP WGS data was generated from 871 patients from St. Jude Children's Research Hospital (St. Jude) through the Sickle Cell Clinical Research and Intervention Program (SCCRIP, Pediatr Blood Cancer. 2018 May 24: e27228) and from Baylor College of Medicine (BCM). All study participants provided informed consent for genomic study and data sharing on IRB-approved research protocols. The SGP data portal will have multi-tiered access. All users will have access to a general heat map view which shows anonymized patient clinical values (e.g., fetal hemoglobin (HbF), mean corpuscular volume (MCV), hemoglobin concentration (Hb)) and relevant SCD-modifying variants (e.g., Beta-globin locus, MYB, BCL11A, HBA). The GenomePaint browser allows for viewing coding and noncoding variants. Displayed with each variant will be a visual indication of the median fetal hemoglobin values for patients homozygous for the reference allele, heterozygous, or homozygous for the alternative allele. The browser also displays erythroid-specific DNA-accessibility and epigenetic marks and indicates variants that may disrupt erythroid-specific transcription factor binding sites (GATA1 and BCL11A). For anonymization purposes, within the genome browser and heat map views, clinical values and the patient's age will be binned into ranges when displayed as single or low-count values. Lastly, the ProteinPaint tool (Zhang and Zhou, Nature Gen, Dec 29, 2015) will enable visualization and filtering of variants with reference to protein domain and amino acid sequence. To access processed data such as BAM and VCF files for downstream analyses, a user will be required to apply for access, which will be adjudicated by a data access committee. Verified researchers will be granted access to clinical data in a manner consistent with the protocol-specific informed consent documentation and the protocol under which the sequencing was performed. This may include coded clinical and demographic data when specified by the research protocol and informed consent. The SGP data set will be one of the first WGS datasets from primarily African American sickle cell patients to be made available to clinicians and researchers worldwide. In addition, no SCD-centric data portal exists that contains controlled access to data and provides graphical tools for visual analysis. The combination of visual tools and downloadable data provides the scientific community with an invaluable resource for studying sickle cell disease.
Disclosures Estepp: Daiichi Sankyo: Consultancy; NHLBI: Research Funding; Global Blood Therapeutics: Consultancy, Research Funding; ASH Scholar: Research Funding. Hankins:Global Blood Therapeutics: Research Funding; bluebird bio: Consultancy; NCQA: Consultancy; Novartis: Research Funding.
APA, Harvard, Vancouver, ISO, and other styles
39

Rhyne, Andrew L., Michael F. Tlusty, Joseph T. Szczebak, and Robert J. Holmberg. "Expanding our understanding of the trade in marine aquarium animals." PeerJ 5 (January 26, 2017): e2949. http://dx.doi.org/10.7717/peerj.2949.

Full text
Abstract:
The trade of live marine animals for home and public aquaria has grown into a major global industry. Millions of marine fishes and invertebrates are removed from coral reefs and associated habitats each year. The majority are imported into the United States, with the remainder sent to Europe, Japan, and a handful of other countries. Despite the recent growth and diversification of the aquarium trade, to date, data collection is not mandatory, and hence comprehensive information on species volume and diversity is lacking. This lack of information makes it impossible to study trade pathways. Without species-specific volume and diversity data, it is unclear how importing and exporting governments can oversee this industry effectively or how sustainability should be encouraged. To expand our knowledge and understanding of the trade, and to effectively communicate this new understanding, we introduce the publicly available Marine Aquarium Biodiversity and Trade Flow online database (https://www.aquariumtradedata.org/). This tool was created to communicate the volume and diversity of marine fishes and/or invertebrates imported into the US over three complete years (2008, 2009, and 2011) and three partial years (2000, 2004, 2005). To create this tool, invoices pertaining to shipments of live marine fishes and invertebrates were scanned and analyzed for species name, species quantities, country of origin, port of entry, and city of import destination. Here we focus on the analysis of the later three years of data and also produce an estimate for the entirety of 2000, 2004, and 2005. The three-year aggregate totals (2008, 2009, 2011) indicate that just under 2,300 fish and 725 invertebrate species were imported into the US cumulatively, although just under 1,800 fish and 550 invertebrate species were traded annually. Overall, the total number of live marine animals decreased between 2008 and 2011. In 2008, 2009, and 2011, the total numbers of individual fish (8.2, 7.3, and 6.9 million individuals) and invertebrates (4.2, 3.7, and 3.6 million individuals) assessed by analyzing the invoice data are roughly 60% of the total volumes recorded through the Law Enforcement Management Information System (LEMIS) dataset. Using these complete years, we back-calculated the number of individuals of both fishes and invertebrates imported in 2000, 2004, and 2005. These estimates (9.3, 10.8, and 11.2 million individual fish per year) were consistent with the three years of complete data. We also use these data to understand the global trade in two species (Banggai cardinalfish, Pterapogon kauderni, and orange clownfish, Amphiprion ocellaris/percula) recently considered for Endangered Species Act listing. Aquariumtradedata.org can help create more effective management plans for the traded species, and ideally could be implemented at key trade ports to better assess the global trade of aquatic wildlife.
APA, Harvard, Vancouver, ISO, and other styles
40

Hartigan, Joshua, Shev MacNamara, Lance Leslie, and Milton Speer. "High resolution simulations of a tornadic storm affecting Sydney." ANZIAM Journal 62 (May 23, 2021): C1—C15. http://dx.doi.org/10.21914/anziamj.v62.16113.

Full text
Abstract:
On 16 December 2015 a severe thunderstorm and associated tornado affected Sydney, causing widespread damage and insured losses of $206 million. Severe impacts occurred in Kurnell, requiring repairs to Sydney's desalination plant, which supplies up to 15% of Sydney's water during drought; the repairs were only completed at the end of 2018. Climatologically, this storm was unusual, as it occurred during the morning and had developed over the ocean, rather than developing inland during the afternoon as is the case for many severe storms impacting the Sydney region. Simulations of the Kurnell storm were conducted using the Weather Research and Forecasting (WRF) model on a double-nested domain using the Morrison microphysics scheme and the NSSL 2-moment 4-ice microphysics scheme. Both simulations produced severe storms that followed paths similar to the observed storm. However, the storm produced under the Morrison scheme did not have the same morphology as the observed storm, whereas the storm simulated with the NSSL scheme displayed cyclical low- and mid-level mesocyclone development, which was observed in the Kurnell storm, highlighting that the atmosphere supported the development of severe rotating thunderstorms with the potential for tornadogenesis. The NSSL storm also produced severe hail and surface winds, similar to observations. The ability of WRF to simulate general convective characteristics and a storm similar to that observed demonstrates the applicability of this model to studying the causes of severe high-impact Australian thunderstorms.
41

Munshi, R., and T. Grimes. "Medication-related harm and the newspapers - what has been communicated to the public in Ireland: A systematic content analysis." International Journal of Pharmacy Practice 29, Supplement_1 (March 26, 2021): i28–i29. http://dx.doi.org/10.1093/ijpp/riab015.034.

Abstract:
Introduction: Reducing the global prevalence of severe, avoidable medication-related harm (MRH) by 50% by the end of 2022 is the WHO's third global patient safety challenge [1]. MRH is reported frequently in the academic literature, with increasing age being a key risk factor. The WHO has highlighted the need to improve public health literacy and knowledge about medications. Little is known about the frequency and nature of Irish newspaper reports about MRH. This study sought to address this gap and to examine reporting during the calendar years 2019 and 2009.

Methods: In this mixed-methods study, LexisNexis® [2], an online newspaper archive database, was searched for newspaper articles reporting on MRH, published in the Republic of Ireland during the calendar years 2019 and 2009. The search strategy focussed on "medication" AND "harm" AND "patient". Quantitative data extraction aimed to describe the frequency of reporting of MRH (by count of articles) and its nature, by describing the publishing newspaper titles and the reported details of: drug class(es), demographics (age or life stage, gender) of those experiencing harm, and the severity of harm. Qualitatively, a systematic content analysis using inductive coding is ongoing and will be reported separately. Research ethics committee approval for this study is not required because this is an analysis of material in the public domain.

Results: In total, 7098 newspaper articles were identified through database searching for 2019 (n=3217) and 2009 (n=3881). To date, 54% (3867/7098) of these have been screened (n=3217, 45%, for 2019 and n=650, 9%, for 2009), of which 63 newspaper articles (n=44 in 2019, n=19 in 2009) were included and quantitative data were extracted. Within these 63 articles, 71 cases of individual people experiencing MRH were reported (52 in 2019 and 19 in 2009). The newspapers most commonly reporting MRH were the Irish Daily Mail (31/63: 27 in 2019 and 4 in 2009) and the Irish Times (17/63: 9 in 2019 and 8 in 2009). The drug classes most frequently reported as causing MRH were central nervous system drugs (antiepileptics n=10, opioid analgesics n=5, antidepressants n=9, and anxiolytics n=1), cancer chemotherapy (23 cases) and non-steroidal anti-inflammatories (n=3). MRH was reported as being fatal (13/71: 8 in 2019 and 5 in 2009) and non-fatal (58/71), with seven cases (5 in 2019 and 2 in 2009) of permanent harm. Among the 71 individual cases of MRH, the majority were adults aged 18–64 years (n=36), followed by older adults (n=8), children (n=7), foetuses (n=3) and a newborn (n=1); the remainder did not report the person's age.

Conclusion: MRH is frequently reported to the public through Irish newspapers. The study is limited by its focus on newsprint media, excluding other forms of digital or social media, and by its restriction to two calendar years in a single country, which likely limits the generalisability of the findings to other contexts. Future work could explore this issue across a wider range of media platforms and examine changes in reporting over time. The study findings may support an agenda to improve the general public's exposure to information and knowledge of MRH and medication safety.

References: 1. Donaldson, L.J., et al. Medication without harm: WHO's third global patient safety challenge. The Lancet, 2017; 389(10080): 1680–1681. 2. https://advance-lexis-com.elib.tcd.ie/firsttime?crid=d5f713e8-8107-4efd-91cc-1e99c82cdb58&pdmfid=1519360.
42

Kumar, Kishore, Durai Prabhu, Dharani Devi, Dilshada Pulikkal, Joshua Daniel, and Chezhian Subash. "How We Did Bone Marrow Transplants amidst the COVID19 Pandemic." Blood 136, Supplement 1 (November 5, 2020): 40. http://dx.doi.org/10.1182/blood-2020-138677.

Abstract:
Introduction: COVID-19 has been the most talked-about name for the last six months, and the situation now seems to be worsening. This pandemic has created significant stress on the healthcare system. Most resources are being diverted to COVID-19 care, and the care of non-COVID patients is compromised. As haematologists dealing with frail, immune-compromised patients, the challenges we face include staff attrition, near-empty blood banks, fewer intensive care beds, the chance of our patients contracting COVID-19 during their hospital stay, and the fear of donors being asymptomatic COVID-19 carriers, to mention a few. In this situation, we have tried to formulate a practical approach to performing bone marrow transplants, which we have been following for the last few months at our centre.

Our Transplant Protocol: Initially, we tried to postpone transplants, but as the COVID-19 situation became a chronic one, we formulated our own approach to restarting them. Amidst lockdown, we successfully completed autografts in myeloma and lymphoma, and also started allogeneic transplants, including haplo-identical transplants for acute leukaemia. We had to strike a balance in transplant management and be practical in ordering tests, therapy, and transfusions. Due to lockdown and general panic, blood bank stocks were at an all-time low, so transfusions and donor arrangements had to be handled judiciously. The hospital was divided into COVID and non-COVID zones. All patients with fever and respiratory symptoms go directly to the COVID zone and are examined and tested by physicians with proper PPE, as per WHO protocol. Even when there are no COVID-19 symptoms such as dry cough, fever, or throat pain, patients entering the non-COVID zone are also screened by a general physician at a single point of entry with proper protection; each patient, with one attendant, is then allowed to come to the transplant outpatient department. This helped us reduce the risk to medical professionals and to other patients waiting in a specialty department like ours. We had a detailed discussion about the pros and cons with each patient and attendant, admitted them in a separate block, and tested them for COVID-19 by RT-PCR. Each patient had an HRCT of the thorax to check for early radiological signs of COVID-19. Blood parameters that serve as prognostic markers for COVID-19 were checked alongside, to guard against false-negative tests. The donor was made to stay as the patient's attendant; we shifted them inside the transplant unit and observed them for a week to rule out COVID-19 symptoms. Radiological investigations were done before starting the procedure. Minimal physical interaction was maintained, and in myeloma and lymphoma autografts we reduced the doses of the conditioning-regimen drugs by around 20%. For allografts, no drug dose modification was made. The threshold for initiating antibiotics was kept very low, and high-end antibiotics such as colistin and fosfomycin were started early. We collected single-donor platelets and kept stocks ready to avoid exposure to multiple random donors. The blood bank was also very careful in selecting donors, after thorough screening for COVID-19 symptoms. Once engrafted, patients were discharged early and kept on follow-up mostly by tele-consultation, with personal visits limited to once every four weeks.

From the first lockdown to the date of submission, we completed nine bone marrow transplants at our centre, including three AML haplo-identical transplants.

Conclusion: Postponing transplants is not feasible in all situations, as some refractory diseases will ultimately relapse, and transplant then becomes the only life-saving procedure. As we wait for the situation to improve, the normal functioning of hospitals may take some more time. The above are the methods we are following at our MIOT hospital in Chennai, India, a city which had around 100,000 COVID-19 cases at the time of submission. Our advice is that, wherever feasible, we should try to stick to time-tested conventional protocols; the above measures may be useful temporarily until the COVID-19 crisis is over.

Disclosures: No relevant conflicts of interest to declare.
43

Brovelli, Maria Antonia, Candan Eylül Kilsedar, and Francesco Frassinelli. "Mobile Tools for Community Scientists." Abstracts of the ICA 1 (July 15, 2019): 1–3. http://dx.doi.org/10.5194/ica-abs-1-30-2019.

Abstract:
While public participation in scientific achievements has a long history, the last decades have seen more attention and an impressive increase in the number of involved people. Citizen science, the term used for denoting such an attitude, is a very diverse practice, encompassing various forms, depths, and aims of collaboration between scientists and citizen researchers and a broad range of scientific disciplines. Different classifications of citizen science projects exist based on the degrees of influence and contributions of citizens. Haklay, Mazumdar, and Wardlaw (2018) distinguish citizen science projects into three different classes:

1. Long-running citizen science: the traditional projects, similar to those run in the past (Kobori et al., 2016; Bonney et al., 2009).
2. Citizen cyberscience, strictly connected with the use of technologies (Grey, 2009), which can be subclassified into:
   a. volunteer computing, where citizens offer the unused computing resources of their computers;
   b. volunteer thinking, where citizens offer their cognitive abilities for performing tasks difficult for machines;
   c. passive sensing, where citizens use the sensors integrated into mobile computing devices to carry out automatic sensing tasks.
3. Community science, involving a more significant commitment of citizens, also in designing and planning the project activities, in a more egalitarian (if not bottom-up) approach between scientists and citizen scientists (Jepson & Ladle, 2015; Nascimento, Guimarães Pereira, & Ghezzi, 2014; Breen, Dosemagen, Warren, & Lippincott, 2015), which can be divided into:
   a. participatory sensing, where citizens use the sensors integrated into mobile computing devices to carry out sensing tasks;
   b. Do It Yourself (DIY) science, which implies participants create their own scientific tools and methodology to carry out their research;
   c. civic science, "which is explicitly linked to community goals and questions the state of things" (Haklay et al., 2018).

The work presented here is of interest to community scientists who voluntarily offer their time for the development of scientific projects. Many software tools have been developed in order to simplify the insertion of data into structured forms and the aggregation and analysis of the obtained data. In recent years, the growing availability of feature-rich and low-cost smartphones has boosted the development of innovative solutions for data collection using portable devices. In this field, ODK (OpenDataKit) is widely known. It is an open-source suite of tools focused on simplicity of use, which includes an Android application for data collection. We used ODK for the first applications we developed.

One of the applications we developed using ODK is Via Regina (http://www.viaregina.eu/app). The application aims to support slow tourism along Via Regina, a road that overlooks the west coast of Lake Como in Northern Italy. Over the centuries, Via Regina has been a critical trade and pilgrim route in Europe. Moreover, from this road a compact system of slow-mobility paths departs, spanning the mountainous region at the border between Italy and Switzerland. This region is rich in culture, regarding history, art, architecture, cuisine and people's lifestyle.
Considering that collecting data on Via Regina and the paths around it would make it possible to rediscover and promote its culture while enjoying the territory, an Interreg project named "The Paths of Regina" was started. The application developed within this project allows collecting data of predefined types: historical and cultural, morphological, touristic, and critical. Moreover, while reporting a point of interest (POI), the application asks for its name, its position (through GPS or an interactive map), a picture, and optionally a video and an audio record of it (Antonovic et al., 2015).

However, since the ODK application can be used only on Android devices, we developed a cross-platform application to collect similar data for the same purpose. It is available on Android, iOS, and the web (http://viaregina3.como.polimi.it/app/). The application is developed using Apache Cordova, a mobile application development framework that enables running the application on multiple platforms. The Leaflet library is used for web mapping. The data is stored in NoSQL PouchDB and CouchDB databases, which enables both online and offline data collection. While reporting a POI, the application asks for its type, the user's rating, a comment, and a picture of it, either uploaded from the device's storage or taken using the camera of the mobile device. In addition to being cross-platform, it has the advantage of displaying and enabling the query of reported POIs, compared to the ODK-based version (Brovelli, Kilsedar, & Zamboni, 2016). Regarding citizen science, besides the citizens using these two applications, Iubilantes, a voluntary cultural organization, has been involved in the project as community scientists. Iubilantes created slow-mobility paths to walk in and around Via Regina, using the experience gained through studying ancient paths while protecting and enhancing their assets since 1996.

Mobile data collection can also be used to compensate for the lack of reference data available for land cover validation. We developed the Land Cover Collector (https://github.com/kilsedar/land-cover-collector) application for this purpose, which collects data using the nomenclature of GlobeLand30. GlobeLand30 is the first global land cover map at 30-meter resolution, provided by the National Geomatics Center of China, available for 2000 and 2010 (Chen et al., 2015). There are ten land cover classes in the GlobeLand30 dataset: artificial surface, bare land, cultivated land, forest, grassland, permanent snow and ice, shrubland, tundra, water body, and wetland. The collected data will be used for validating GlobeLand30 (Kilsedar, Bratic, Molinari, Minghini, & Brovelli, 2018). The data is licensed under the Open Database License (ODbL) v1.0 and can be downloaded within the application in JSON format. The application is currently available in eight languages: English, Italian, Arabic, Russian, Chinese, Portuguese, French and Spanish. The technologies used are the same as in the cross-platform Via Regina application. As a result, it is available on Android, iOS, and the web (https://landcover.como.polimi.it/collector/), and it supports display and query of the collected data. While reporting a POI, the application asks for its land cover class, the user's degree of certainty about the correctness of the stated class, photos in the north, east, south and west directions, and the user's comment.
Three hands-on workshops were given to teach this application and various ways to validate GlobeLand30: the first on September 1, 2018 at the World Bank in Dar es Salaam, Tanzania (in conjunction with the FOSS4G 2018 conference); the second on September 3, 2018 at the Regional Centre for Mapping of Resources for Development (RCMRD) in Nairobi, Kenya; and the third on October 1, 2018 at the Delft University of Technology in Delft, Netherlands. The workshops, run by representatives of the project's principal investigators, Politecnico di Milano (Italy) and the National Geomatics Center of China (China), were attended by a total of 100 people with a background in GIS and remote sensing (Brovelli et al., 2018).

Nonetheless, there are no widely adopted cross-platform open-source solutions or systems for on-site surveys that address the problem of information silos: isolated databases, where information is not adequately shared but rather remains sequestered within each system, which is an obstacle to using data mining to make productive use of the data of multiple systems.

PSAB (Participatory Sensing App Builder) is a platform that provides an open-source and easy-to-use cross-platform solution for the creation of custom smartphone applications, as well as web applications and a catalog service for publishing the data and making them available to everyone. It takes advantage of established standards (like XLSForm for defining the structure of the form and Dublin Core for exposing metadata) as well as less known yet effective solutions, like WQ (https://wq.io), a framework developed for building reusable software platforms for citizen science. These technologies have been merged, together with other software like Django, PyCSW, and PostgreSQL, into a single solution, in order to assist the user during the entire process, from the definition of the form structure to the creation of an ad-hoc application and the publication of the collected data, inside a flexible and open-source platform.

Users registered to PSAB are allowed to create a new application by filling in a web form where they can upload their XLSForm files and submit the metadata describing the data to be collected. A new application for collecting data in the field is generated and made accessible via the web and Android (while iOS requires a particular setup), ready to be used online and offline. The creator of each application is also its administrator, which means he/she is allowed to add or ban users and modify or remove existing data. Data is automatically synchronized between all the users participating in the project.

In the presentation we will show the applications we developed, starting from the ODK-based ones and coming to the PSAB application builder, and our experience related to their usage.
44

Al-Drees, Mohammed, Marwah M. Almasri, Mousa Al-Akhras, and Mohammed Alawairdhi. "Building a DNS Tunneling Dataset." International Journal of Sensors, Wireless Communications and Control 10 (November 24, 2020). http://dx.doi.org/10.2174/2210327910999201124205758.

Abstract:
Background: Domain Name System (DNS) is considered the phone book of the Internet. Its main goal is to translate a domain name to an IP address that the computer can understand. However, DNS can be vulnerable to various kinds of attacks, such as DNS poisoning and DNS tunneling attacks.

Objective: The main objective of this paper is to allow researchers to identify DNS tunnel traffic using machine-learning algorithms. Training machine-learning algorithms to detect DNS tunnel traffic and to determine which protocol was used will help the community speed up the detection of such attacks.

Method: In this paper, we consider the DNS tunneling attack and discuss how attackers can exploit this protocol to exfiltrate data from the network. The attack starts by encoding data inside DNS queries sent outside the network. The malicious DNS server receives the small chunks of data, decodes the payloads, and reassembles them at the server. The main concern is that DNS is a fundamental service that is not usually blocked by a firewall and receives less attention from systems administrators due to the vast amount of traffic it carries.

Results: This paper investigates how this type of attack happens using a DNS tunneling tool, by setting up an environment consisting of a compromised DNS server and a compromised host, with the Iodine tool installed on both machines. The generated dataset contains the traffic of the HTTP, HTTPS, SSH, SFTP, and POP3 protocols over DNS. No features were removed from the dataset, so that researchers can utilize all of them.

Conclusion: DNS tunneling remains a critical attack that needs more attention. The DNS-tunneled environment allowed us to understand how such an attack happens. We built a suitable dataset by simulating various attack scenarios using different protocols. The created dataset contains PCAP, JSON, and CSV files to allow researchers to use different methods to detect tunnel traffic.
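To make the encoding step concrete, here is a minimal Python sketch of the client side of an Iodine-style DNS tunnel as described above; the tunnel domain, chunk size, and choice of base32 are illustrative assumptions, not details taken from the paper.

```python
import base64
import textwrap

# Hypothetical attacker-controlled domain; recursive resolvers forward
# queries for it to the attacker's authoritative name server.
TUNNEL_DOMAIN = "t.example.com"

def payload_to_queries(payload: bytes, max_label: int = 63) -> list:
    """Encode a payload into DNS query names, one chunk per query.

    DNS names are case-insensitive, so base32 is a safe encoding;
    each DNS label may be at most 63 characters long.
    """
    encoded = base64.b32encode(payload).decode("ascii").rstrip("=").lower()
    return [f"{chunk}.{TUNNEL_DOMAIN}"
            for chunk in textwrap.wrap(encoded, max_label)]

if __name__ == "__main__":
    for qname in payload_to_queries(b"exfiltrated file contents"):
        # Each name would be sent as an ordinary-looking DNS lookup;
        # the server decodes the labels and reassembles the payload.
        print(qname)
```

A detector trained on the resulting traffic can exploit exactly these artifacts: unusually long labels, high-entropy subdomains, and a high query rate toward a single domain.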
45

Yoder, Matthew, and Dmitry Dmitriev. "Nomenclature over 5 years in TaxonWorks: Approach, implementation, limitations and outcomes." Biodiversity Information Science and Standards 5 (September 20, 2021). http://dx.doi.org/10.3897/biss.5.75441.

Abstract:
We are now over four decades into digitally managing the names of Earth's species. As the number of federating (i.e., software that brings together previously disparate projects under a common infrastructure, for example TaxonWorks) and aggregating (e.g., International Plant Names Index, Catalog of Life (CoL)) efforts increases, there remains an unmet need both for the migration forward of old data and for the production of new, precise and comprehensive nomenclatural catalogs. Given this context, we provide an overview of how TaxonWorks seeks to contribute to this effort, and where it might evolve in the future.

In TaxonWorks, when we talk about governed names and relationships, we mean it in the sense of the existing international codes of nomenclature (e.g., the International Code of Zoological Nomenclature (ICZN)). More technically, nomenclature is defined as a set of objective assertions that describe the relationships between the names given to biological taxa and the rules that determine how those names are governed. It is critical to note that this is not the same thing as the relationship between a name and a biological entity; rather, nomenclature in TaxonWorks represents the details of the (governed) relationships between names. Rather than thinking of nomenclature as changing (a verb commonly used to express frustration with biological nomenclature), it is useful to think of nomenclature as a set of data points which grows over time. For example, when synonymy happens, we do not erase the past, but rather record a new context for the name(s) in question. The biological concept changes, but the nomenclature (names) simply keeps adding up.

Behind the scenes, nomenclature in TaxonWorks is represented by a set of nodes and edges, i.e., a mathematical graph, or network (e.g., Fig. 1). Most names (i.e., nodes in the network) are what TaxonWorks calls "protonyms": monomial epithets that are used to construct, for example, binomial names (not to be confused with "protonym" sensu the ICZN). Protonyms are linked to other protonyms via relationships defined in NOMEN, an ontology that encodes the governed rules of nomenclature. Within the system, all data, nodes and edges, can be cited, i.e., linked to a source and therefore anchored in time and tied to authorship, and annotated with a variety of annotation types (e.g., notes, confidence levels, tags). The actual building of the graphs is greatly simplified by multiple user interfaces that allow scientists to review (e.g., Fig. 2), create, filter, and add to (again, not "change") the nomenclatural history.

As in any complex knowledge-representation model, there are outlying scenarios, or edge cases, that emerge, making certain human tasks more complex than others. TaxonWorks is no exception; it has limitations in terms of what and how some things can be represented. While many complex representations are hidden by simplified user interfaces, some, for example the handling of the ICZN's family-group names, batch-loading of invalid relationships, and comparative syncing against external resources, need more work to simplify the processes presently required to meet catalogers' needs. The depth at which TaxonWorks can capture nomenclature is only really valuable if it can be used by others. This is facilitated by the application programming interface (API) serving its data (https://api.taxonworks.org), by serving text files, and by exports to standards like the emerging Catalog of Life Data Package.
With reference to real-world problems, we illustrate different ways in which the API can be used: for example, integrated into spreadsheets, called from command-line scripts, or used in the generation of public-facing websites. Behind all this effort is an increasing number of people recording help videos, developing documentation, and troubleshooting software and technical issues. Major contributions have come from developers at many skill levels, from high school to senior software engineers, illustrating that TaxonWorks leads in enabling both technical and domain-based contributions. The health and growth of this community is a key factor in TaxonWorks' potential long-term impact on the effort to unify the names of Earth's species.
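As a concrete illustration of scripted API use, the Python sketch below pulls nomenclature records with the requests library; the endpoint path, query parameter, and token header are assumptions made for illustration and should be checked against the documentation at https://api.taxonworks.org.

```python
import requests

API_BASE = "https://api.taxonworks.org/v1"  # version prefix is assumed

def fetch_taxon_names(token: str, name: str) -> list:
    """Sketch: fetch name records matching a string, for use in a
    spreadsheet, command-line script, or static-site generator."""
    response = requests.get(
        f"{API_BASE}/taxon_names",            # hypothetical endpoint
        params={"name": name},                 # hypothetical filter
        headers={"Authorization": f"Token {token}"},  # assumed auth scheme
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example usage (with a project API token); the returned rows could be
# written to CSV and opened directly in a spreadsheet:
# rows = fetch_taxon_names("MY-PROJECT-TOKEN", "Aus bus")
```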
46

Liljeblad, Johan, Tapani Lahti, and Matts Djos. "Linked Data Tools for Managing Taxonomic Databases." Biodiversity Information Science and Standards 3 (June 21, 2019). http://dx.doi.org/10.3897/biss.3.37329.

Abstract:
Taxonomic information is dynamic, i.e., changes are made continuously, so scientific names are insufficient to track changes in taxon circumscription. The principles of Linked Open Data (LOD), as defined by the World Wide Web Consortium, can be applied to documenting the relationships of taxon circumscriptions over time and between checklists of taxa. In our scheme, each checklist and each taxon in a checklist is assigned a globally unique, persistent identifier. According to the LOD principles, HTTP Uniform Resource Identifiers (URIs) are used as identifiers, providing both human-readable (HTML) and machine-readable (XML) responses to client requests. Common vocabularies are needed in machine-readable responses to HTTP URIs. We use SKOS (Simple Knowledge Organization System) as a basic vocabulary, describing checklists as instances of the class skos:ConceptScheme and taxa as instances of the class skos:Concept. Set relationships between taxon circumscriptions are described using the properties skos:broader and skos:narrower. The Darwin Core vocabulary is used for describing taxon properties in the checklists, such as scientific names, taxonomic ranks and authorship strings. Instead of directly linking taxon circumscriptions between checklists, we define an HTTP URI for each unique circumscription. This common identifier is then mapped to the internal checklist identifiers matching the circumscription using the property skos:exactMatch. For the management of the URIs, the domain name TAXONID.ORG has been registered. In a pilot study, our approach was applied to linking taxon circumscriptions of selected taxa between the national checklists of Sweden and Finland. In the future, national checklists from other Nordic and Baltic countries (Norway, Denmark, Iceland, Estonia) can easily be linked together as well. The work is part of the NeIC DeepDive project (neic.no).
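A minimal sketch of this scheme using Python's rdflib follows; the checklist URIs, internal identifiers, and the TAXONID.ORG path layout are invented for illustration, while the SKOS classes and properties are the ones named in the abstract.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, SKOS

TAXONID = Namespace("http://taxonid.org/")  # managed circumscription URIs

g = Graph()

# A national checklist is an instance of skos:ConceptScheme.
checklist = URIRef("http://example.org/checklists/sweden")
g.add((checklist, RDF.type, SKOS.ConceptScheme))

# A taxon circumscription in the checklist is a skos:Concept.
taxon = URIRef("http://example.org/checklists/sweden/taxa/12345")
g.add((taxon, RDF.type, SKOS.Concept))
g.add((taxon, SKOS.inScheme, checklist))
g.add((taxon, SKOS.prefLabel, Literal("Corvus cornix Linnaeus, 1758")))

# Set relationships between circumscriptions use skos:broader/narrower.
parent = URIRef("http://example.org/checklists/sweden/taxa/12000")
g.add((taxon, SKOS.broader, parent))

# The checklist-internal identifier is mapped to the shared
# circumscription URI with skos:exactMatch.
g.add((taxon, SKOS.exactMatch, TAXONID["abc-123"]))

print(g.serialize(format="turtle"))
```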
47

Herrmann, Dominik. "Privacy issues in the Domain Name System and techniques for self-defense." it - Information Technology 57, no. 6 (January 28, 2015). http://dx.doi.org/10.1515/itit-2015-0038.

Abstract:
There is a growing interest in retaining and analyzing metadata. Motivated by this trend, the dissertation studies the potential for surveillance via the Domain Name System, an important infrastructure service on the Internet. Three fingerprinting techniques are developed and evaluated. The first technique allows a resolver to infer the URLs of the websites a user visits by matching characteristic patterns in DNS requests. The second technique allows determining the operating system and browser of a user based on behavioral features. Thirdly, and most importantly, it is demonstrated that the activities of users can be tracked over multiple sessions, even when the user's IP address changes over time. In addition, the dissertation considers possible countermeasures. Obfuscating the desired hostnames by sending a large number of dummy requests (so-called range queries) turns out to be less effective than previously believed. Therefore, more appropriate techniques (mixing queries, pushing popular queries, and extended caching of results) are proposed in the thesis. The results raise awareness of an overlooked threat that infringes on the privacy of Internet users, and they also contribute to the development of more usable and more effective privacy-enhancing technologies.
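The third result, cross-session tracking, can be illustrated with a toy re-identification routine: a new session is linked to the stored profile whose set of queried hostnames is most similar. The similarity measure and threshold below are illustrative choices, not the dissertation's actual classifier.

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two sets of queried hostnames."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def link_session(profiles: dict, session: set, threshold: float = 0.4):
    """Attribute a new DNS session to the most similar known profile,
    even though the client's IP address may have changed."""
    best_user, best_score = None, 0.0
    for user, hostnames in profiles.items():
        score = jaccard(hostnames, session)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= threshold else None

profiles = {
    "user-a": {"news.example", "mail.example", "forum.example"},
    "user-b": {"shop.example", "cdn.example", "video.example"},
}
# Same person, new IP address: the query pattern still links the session.
print(link_session(profiles, {"news.example", "mail.example", "blog.example"}))
```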
48

Ян Коваленко. "EXTRAJUDICIAL REMEDIES FOR THE RIGHT TO A DOMAIN NAME." Theory and Practice of Intellectual Property, no. 3 (June 30, 2020). http://dx.doi.org/10.33731/32020.216583.

Abstract:
Today, members of the business community in one way or another present their companies, products, or services on the World Wide Web. A company's main purpose on the internet is to create and use its own website to provide information about its products, to find potential buyers, and to demonstrate its advantages and how it differs from its competitors. The main purpose of registering a specific domain name is to create favorable (convenient) conditions for buyers who want to get acquainted with the company or its range: if a domain name is associatively similar to the name of a particular company, it will be much easier for a buyer to find that company's website than through search engines.

Business representatives do not always succeed in this, especially if the name (trademark, brand) of the company is widely known, or if the company has become a "victim" of unfair competition. This creates disputes that the parties are interested in resolving as quickly and cheaply as possible. Such resolution is greatly facilitated by the work of the World Intellectual Property Organization and a number of documents, chief among them the "Principles for Dispute Resolution on Identical Domain Names" and the Uniform Domain Name Dispute Resolution Policy (UDRP) adopted by ICANN. However, their analysis, as well as the analysis of law-enforcement practice, allows us to speak not only about the effectiveness but also about certain shortcomings of the ICANN procedure for resolving disputes over domain names, which entail ambiguous dispute-resolution practices.

The article analyzes the regulation of the protection of rights to a domain name against the prescriptions of the "Principles for the Resolution of Disputes about Identical Domain Names" and the Uniform Domain Name Dispute Resolution Rules (UDRP) adopted by ICANN. An attempt is made to highlight the advantages and disadvantages of the out-of-court procedure for resolving disputes over domain names, and possible ways to improve such a system in Ukraine are suggested.
49

Hyder, Muhammad Faraz, and Muhammad Ali Ismail. "Toward Domain Name System privacy enhancement using intent‐based Moving Target Defense framework over software defined networks." Transactions on Emerging Telecommunications Technologies, June 3, 2021. http://dx.doi.org/10.1002/ett.4318.

50

Heikkinen, Mikko, Anniina Kuusijärvi, Ville-Matti Riihikoski, and Leif Schulman. "Multi-domain Collection Management Simplified — the Finnish National Collection Management System Kotka." Biodiversity Information Science and Standards 4 (October 9, 2020). http://dx.doi.org/10.3897/biss.4.59119.

Abstract:
Many natural history museums share a common problem: a multitude of legacy collection management systems (CMS) and the difficulty of finding a new system to replace them. Kotka is a CMS developed starting in 2011 at the Finnish Museum of Natural History (Luomus) and the Finnish Biodiversity Information Facility (FinBIF) (Heikkinen et al. 2019, Schulman et al. 2019) to solve this problem. It has grown into a national system used by all natural history museums in Finland, and currently contains over two million specimens from several domains (zoological, botanical, paleontological, microbial, tissue sample and botanic garden collections). Kotka is a web application where data can be entered, edited, searched and exported through a browser-based user interface. It supports designing and printing specimen labels, handling collection metadata and specimen transactions, and helps support Nagoya Protocol compliance.

Creating a shared system for multiple institutions and collection types is difficult due to differences in their current processes, data formats, future needs and opinions. The more independent actors are involved, the more complicated the development becomes, so successful development requires some trade-offs. Kotka has chosen features and development principles that emphasize fast development for a multitude of different purposes. Kotka was developed using agile methods, with a single person (a product owner) making development decisions based on, e.g., strategic objectives, customer value and user feedback. Technical design emphasizes efficient development and usage over completeness and formal structure of the data. It applies simple and pragmatic approaches and improves collection management by providing practical tools for the users. In these regards, Kotka differs in many ways from a traditional CMS.

Kotka stores data in a mostly denormalized free-text format and uses a simple hierarchical data model. This allows greater flexibility and makes it easy to add new data fields and structures based on user feedback. Data harmonization and quality assurance is a continuous process, instead of being done before data is entered into the system. For example, specimen data with a taxon name can be entered into Kotka before the taxon name has been entered into the accompanying FinBIF taxonomy database. As an example, consider simplified data about two specimens in Kotka which have not yet been fully harmonized:

Specimen 1 - Taxon: Corvus corone cornix; Country: FI; Collector: Doe, John; Coordinates: 668, 338; Coordinate system: Finnish uniform coordinate system.
Specimen 2 - Taxon: Corvus cornix; Country: Finland; Collector: Doe, J.; Coordinates: 60.2442, 25.7201; Coordinate system: WGS84.

Kotka's data model does not follow standards, but has grown organically to reflect practical needs of the users. This is true particularly of data collected in research projects, which are often unique and complicated (e.g., complex relationships between species), requiring new data fields and/or storing data as free text. The majority of the data can be converted into simplified standard formats (e.g., Darwin Core) for sharing. The main challenge with this has been the vague definitions of many data-sharing formats (e.g., Darwin Core, CETAF Specimen Preview Profile (CETAF 2020)), which allow different interpretations.
Kotka trusts its users: it places very few limitations on what users can do, and has very simple user role management. Kotka stores the full history of all data, which allows fixing any possible errors and prevents data loss. Kotka is open source software, but is tightly coupled with the infrastructure of the Finnish Biodiversity Information Facility (FinBIF). Currently, it is only offered as an online service (Software as a Service) hosted by FinBIF. However, it could be developed into a more modular system that could, for example, utilize multiple different database backends and taxonomy data sources.
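As a sketch of how such a conversion might look, the snippet below maps one denormalized Kotka-style record to simplified Darwin Core terms; the input field names and the country-code lookup are illustrative assumptions, while the Darwin Core term names (scientificName, country, recordedBy, decimalLatitude, decimalLongitude) are standard.

```python
COUNTRY_NAMES = {"FI": "Finland"}  # tiny stand-in for a real code lookup

def to_darwin_core(record: dict) -> dict:
    """Convert one free-text specimen record to simplified Darwin Core."""
    return {
        "scientificName": record["taxon"],
        "country": COUNTRY_NAMES.get(record["country"], record["country"]),
        "recordedBy": record["collector"],
        # Records in national grid coordinates would need a separate
        # conversion step; here we only pass through WGS84 values.
        "decimalLatitude": record.get("lat"),
        "decimalLongitude": record.get("lon"),
    }

specimen = {
    "taxon": "Corvus cornix",
    "country": "FI",
    "collector": "Doe, J.",
    "lat": 60.2442,
    "lon": 25.7201,
}
print(to_darwin_core(specimen))
```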
