Journal articles on the topic 'Remote Direct Memory Access'

To see the other types of publications on this topic, follow the link: Remote Direct Memory Access.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Remote Direct Memory Access.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Wei, Songping Yu, and Zhiying Wang. "Fast In-Memory Key–Value Cache System with RDMA." Journal of Circuits, Systems and Computers 28, no. 05 (May 2019): 1950074. http://dx.doi.org/10.1142/s0218126619500749.

Abstract:
The quick advances of Cloud computing and the advent of Fog computing impose increasingly critical demands for low-latency computing and data transfer onto the underlying distributed computing infrastructure. Remote direct memory access (RDMA) technology has been widely applied for its low latency of remote data access. However, RDMA gives rise to a host of challenges in accelerating in-memory key–value stores, such as direct remote memory writes, which make the remote system more vulnerable. This study presents an in-memory key–value system based on RDMA, named Craftscached, which enables: (1) buffering remote memory writes into a communication cache memory to eliminate direct remote memory writes to the data memory area; (2) dividing the communication cache memory into RDMA-writable and RDMA-readable memory zones to reduce the possibility of data corruption due to stray memory writes, and caching data into an RDMA-readable memory zone to improve remote memory read performance; and (3) adopting remote out-of-place direct memory writes to achieve high performance for both remote reads and writes. Experimental results in comparison with Memcached indicate that Craftscached provides far better performance: (1) in the case of read-intensive workloads, the data access of Craftscached is about 7–43% and 18–72.4% better than that of TCP/IP-based and RDMA-based Memcached, respectively; and (2) memory utilization for small objects is more efficient, with only about 3.8% memory compaction overhead.
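The zone separation in (2) is essentially a permissions question: a region registered for remote reads should reject remote writes. Below is a minimal Python sketch of that idea (hypothetical code, not Craftscached's implementation; real RDMA stacks express this through registration flags such as ibverbs' IBV_ACCESS_REMOTE_READ and IBV_ACCESS_REMOTE_WRITE):

```python
# Toy model of RDMA memory-region access flags (hypothetical sketch).
REMOTE_READ, REMOTE_WRITE = 0x1, 0x2

class MemoryRegion:
    def __init__(self, size, access):
        self.buf = bytearray(size)
        self.access = access

    def remote_write(self, offset, data):
        if not self.access & REMOTE_WRITE:
            raise PermissionError("region is not RDMA-writable")
        self.buf[offset:offset + len(data)] = data

    def remote_read(self, offset, length):
        if not self.access & REMOTE_READ:
            raise PermissionError("region is not RDMA-readable")
        return bytes(self.buf[offset:offset + length])

# Craftscached-style split: incoming writes land in a staging zone,
# while reads are served from a separate read-only cache zone.
write_zone = MemoryRegion(64, REMOTE_WRITE)
read_zone = MemoryRegion(64, REMOTE_READ)

write_zone.remote_write(0, b"value")          # allowed
try:
    read_zone.remote_write(0, b"corruption")  # stray write is rejected
except PermissionError as e:
    print(e)
```

A stray write aimed at the read-only cache zone thus fails at the permission check instead of silently corrupting cached data.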
2

Nyrkov, Anatoliy, Konstantin Ianiushkin, Andrey Nyrkov, Yulia Romanova, and Vagiz Gaskarov. "Data structures access model for remote shared memory." E3S Web of Conferences 244 (2021): 07001. http://dx.doi.org/10.1051/e3sconf/202124407001.

Abstract:
Recent achievements in high-performance computing significantly narrow the performance gap between single-node and multi-node computing, and open up opportunities for systems with remote shared memory. The combination of in-memory storage, remote direct memory access, and remote calls requires rethinking how data are organized, protected, and queried in distributed systems. The reviewed models let us implement new interpretations of distributed algorithms, allowing us to validate different approaches to avoiding race conditions and to decrease resource acquisition or synchronization time. In this paper, we describe a data model for mixed memory access with an analysis of optimized data structures. We also provide the results of experiments containing a performance comparison of data structures operating with different approaches, evaluate the limitations of these models, and show that the model does not always meet expectations. The purpose of this paper is to assist developers in designing data structures that will help to achieve architectural benefits or improve the design of existing distributed systems.
3

Niki, Kazuhisa, and Jing Luo. "An fMRI Study on the Time-Limited Role of the Medial Temporal Lobe in Long-Term Topographical Autobiographic Memory." Journal of Cognitive Neuroscience 14, no. 3 (April 1, 2002): 500–507. http://dx.doi.org/10.1162/089892902317362010.

Abstract:
The time-limited role of the medial temporal lobe (MTL) in human long-term memory is well known. However, there is still no direct neuroimaging evidence to confirm it. In this fMRI study, nine subjects were scanned while asked to recall the places they visited more than seven years ago (remote memories) and the places they visited recently (recent memories). We observed robust and dominant MTL activity peaking in the left parahippocampal gyrus when recent memories were contrasted with remote memories. This result provided direct evidence for the time-limited role of the MTL in long-term topographical autobiographic memory. Further analysis revealed that this MTL activity was not due to the fact that the retrieval of recent memories was accompanied by more details. When detailed recent memories were contrasted with detailed remote memories, there was still MTL activity peaking in the left parahippocampal gyrus. The effects of details in remote memories are also discussed.
4

Hemmatpour, Masoud, Bartolomeo Montrucchio, and Maurizio Rebaudengo. "Communicating Efficiently on Cluster-Based Remote Direct Memory Access (RDMA) over InfiniBand Protocol." Applied Sciences 8, no. 11 (October 24, 2018): 2034. http://dx.doi.org/10.3390/app8112034.

Abstract:
Distributed systems are commonly built under the assumption that the network is the primary bottleneck; however, this assumption no longer holds with the emergence of high-performance RDMA-enabled protocols in datacenters. Designing distributed applications over such protocols requires a fundamental rethinking of the communication components in comparison with traditional protocols (i.e., TCP/IP). In this paper, communication paradigms in existing systems and possible new paradigms have been investigated. The advantages and drawbacks of each paradigm have been comprehensively analyzed and experimentally evaluated. The experimental results show that writing requests to the server and reading the responses performs up to 10 times better than the other communication paradigms. To further expand the investigation, the proposed communication paradigm has been substituted into a real-world distributed application, and the performance has been enhanced by up to seven times.
5

Rybintsev, Vladimir O. "Estimating the Performance of Computing Clusters without Accelerators Based on TOP500 Results." Mathematics 10, no. 19 (September 30, 2022): 3580. http://dx.doi.org/10.3390/math10193580.

Abstract:
Based on an analysis of TOP500 results, a functional dependence of the performance of clusters without accelerators according to the Linpack benchmark on their parameters was determined. The comparison of calculated and tested results showed that the estimation error does not exceed 2% for processors of different generations and manufacturers (Intel, AMD, Fujitsu) with different system interconnect technologies. The achieved accuracy of the calculation allows successful prediction of the performance of a cluster when its parameters (node performance, number of nodes, number of network interfaces, network technology, remote direct memory access, or remote direct memory access over converged Ethernet mode) are changed without resorting to a complex procedure of real testing.
6

Shou, Qinghui, Koichiro Uto, Wei-Chih Lin, Takao Aoyagi, and Mitsuhiro Ebara. "Near-Infrared-Irradiation-Induced Remote Activation of Surface Shape-Memory to Direct Cell Orientations." Macromolecular Chemistry and Physics 215, no. 24 (September 29, 2014): 2473–81. http://dx.doi.org/10.1002/macp.201400353.

7

Zhu, Bohong, Youmin Chen, Qing Wang, Youyou Lu, and Jiwu Shu. "Octopus+: An RDMA-Enabled Distributed Persistent Memory File System." ACM Transactions on Storage 17, no. 3 (August 31, 2021): 1–25. http://dx.doi.org/10.1145/3448418.

Abstract:
Non-volatile memory and remote direct memory access (RDMA) provide extremely high performance in storage and network hardware. However, existing distributed file systems strictly isolate the file system and network layers, and the heavy layered software designs leave high-speed hardware under-exploited. In this article, we propose an RDMA-enabled distributed persistent memory file system, Octopus+, to redesign file system internal mechanisms by closely coupling non-volatile memory and RDMA features. For data operations, Octopus+ directly accesses a shared persistent memory pool to reduce memory-copying overhead, and actively fetches and pushes data all in clients to rebalance the load between the server and network. For metadata operations, Octopus+ introduces self-identified remote procedure calls for immediate notification between file systems and networking, and an efficient distributed transaction mechanism for consistency. Octopus+ is enabled with a replication feature to provide better availability. Evaluations on Intel Optane DC Persistent Memory Modules show that Octopus+ achieves nearly the raw bandwidth for large I/Os and orders of magnitude better performance than existing distributed file systems.
8

Gerstenberger, Robert, Maciej Besta, and Torsten Hoefler. "Enabling Highly-Scalable Remote Memory Access Programming with MPI-3 One Sided." Scientific Programming 22, no. 2 (2014): 75–91. http://dx.doi.org/10.1155/2014/571902.

Abstract:
Modern interconnects offer remote direct memory access (RDMA) features. Yet, most applications rely on explicit message passing for communication despite its unwanted overheads. The MPI-3.0 standard defines a programming interface for exploiting RDMA networks directly; however, its scalability and practicality have to be demonstrated in practice. In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing the highest performance and minimal overheads. To arm programmers, we provide a spectrum of performance models for all critical functions and demonstrate the usability of our library and models with several application studies with up to half a million processes. We show that our design is comparable to, or better than, UPC and Fortran Coarrays in terms of latency, bandwidth, and message rate. We also demonstrate application performance improvements with comparable programming complexity.
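The one-sided style that MPI-3.0 standardizes can be pictured with a toy model (hypothetical Python, not the MPI API itself; the real interface is MPI_Win_create / MPI_Put / MPI_Get plus synchronization calls such as MPI_Win_fence): the target exposes a window of memory, and the origin reads and writes it without the target posting matching receives.

```python
# Toy sketch of one-sided remote memory access (RMA) semantics.
class Window:
    """Memory a target process exposes for remote access."""
    def __init__(self, size):
        self.mem = bytearray(size)

class Origin:
    """The accessing side: it reads and writes the window directly,
    with no matching receive posted by the target."""
    def put(self, win, offset, data):
        win.mem[offset:offset + len(data)] = data

    def get(self, win, offset, length):
        return bytes(win.mem[offset:offset + length])

target_win = Window(32)
origin = Origin()
origin.put(target_win, 0, b"hello")          # one-sided write
print(origin.get(target_win, 0, 5))          # one-sided read
```

In real MPI, accesses like these are grouped into epochs bounded by synchronization calls, which is where the paper's bufferless protocols do their work.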
9

Ziegler, Tobias, Viktor Leis, and Carsten Binnig. "RDMA Communciation Patterns." Datenbank-Spektrum 20, no. 3 (September 29, 2020): 199–210. http://dx.doi.org/10.1007/s13222-020-00355-7.

Abstract:
Remote Direct Memory Access (RDMA) is a networking protocol that provides high-bandwidth and low-latency accesses to a remote node’s main memory. Although there has been much work around RDMA, such as building libraries on top of RDMA or even applications leveraging RDMA, it remains a hard problem to identify the most suitable RDMA primitives and their combination for a given problem. While there have been some initial studies included in papers that aim to investigate selected performance characteristics of particular design choices, there has not been a systematic study to evaluate the communication patterns of scale-out systems. In this paper, we address this issue by systematically investigating how to efficiently use RDMA for building scale-out systems.
10

Wang, Helong, Dingtao Shen, Wenlong Chen, Yiheng Liu, Yueping Xu, and Debao Tan. "Run-Length-Based River Skeleton Line Extraction from High-Resolution Remote Sensed Image." Remote Sensing 14, no. 22 (November 18, 2022): 5852. http://dx.doi.org/10.3390/rs14225852.

Abstract:
Automatic extraction of the skeleton lines of river systems from high-resolution remote-sensing images has great significance for surveying and managing water resources. A large number of existing methods for the automatic extraction of skeleton lines from raster images are primarily used for simple graphs and images (e.g., fingerprint, text, and character recognition). These methods are generally memory-intensive and have low computational efficiency. These shortcomings preclude their direct use in the extraction of skeleton lines from large volumes of high-resolution remote-sensing images. In this study, we developed a method to extract river skeleton lines based entirely on run-length encoding. This method attempts to replace direct raster encoding with run-length encoding for storing river data, which can considerably compress raster data. A run-length boundary tracing strategy is used instead of complete raster matrix traversal to quickly determine redundant pixels, thereby significantly improving the computational efficiency. An experiment was performed using a 0.5 m-resolution remote-sensing image of Yiwu city in the Chinese province of Zhejiang. Raster data for the rivers in Yiwu were obtained using both the DeepLabv3+ deep learning model and the conventional visual interpretation method. Subsequently, the proposed method was used to extract the skeleton lines of the rivers in Yiwu. To compare the proposed method with the classical raster-based skeleton line extraction algorithm developed by Zhang and Suen in terms of memory consumption and computational efficiency, the visually interpreted river data were used to generate skeleton lines at different raster resolutions. The results showed that the proposed method consumed less than 1% of the memory consumed by the classical method and was over 10 times more computationally efficient. This finding suggests that the proposed method has the potential for river skeleton line extraction from terabyte-scale remote-sensing image data on personal computers.
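The run-length idea at the heart of the method can be sketched in a few lines of Python (illustrative only; the paper's implementation also performs boundary tracing over the runs): a binary raster row collapses into (value, length) pairs, which is very compact when rivers form long homogeneous runs of pixels.

```python
# Run-length encode/decode for a binary raster row (illustrative sketch).
def rle_encode(row):
    """Collapse a pixel row into (value, run_length) pairs."""
    runs = []
    for px in row:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([px, 1])      # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into a pixel row."""
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

row = [0, 0, 0, 1, 1, 1, 1, 0, 0, 1]   # 1 = river pixel
runs = rle_encode(row)
print(runs)                             # [(0, 3), (1, 4), (0, 2), (1, 1)]
assert rle_decode(runs) == row          # lossless round trip
```

Ten pixels become four runs here; on real river masks with runs thousands of pixels long, the same encoding explains the paper's large memory savings.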
11

Wei, Xingda, Rong Chen, Haibo Chen, and Binyu Zang. "XStore: Fast RDMA-Based Ordered Key-Value Store Using Remote Learned Cache." ACM Transactions on Storage 17, no. 3 (August 31, 2021): 1–32. http://dx.doi.org/10.1145/3468520.

Abstract:
RDMA (Remote Direct Memory Access) has gained considerable interest in network-attached in-memory key-value stores. However, traversing the remote tree-based index in ordered key-value stores with RDMA becomes a critical obstacle, causing an order-of-magnitude slowdown and limited scalability due to multiple round trips. Using an index cache with conventional wisdom—caching partial data and traversing them locally—usually leads to limited effect because of unavoidable capacity misses, massive random accesses, and costly cache invalidations. We argue that a machine learning (ML) model is a perfect cache structure for the tree-based index, termed a learned cache. Based on it, we design and implement XStore, an RDMA-based ordered key-value store with a new hybrid architecture that retains a tree-based index at the server to perform dynamic workloads (e.g., inserts) and leverages a learned cache at the client to perform static workloads (e.g., gets and scans). The key idea is to decouple ML model retraining from index updating by maintaining a layer of indirection from logical to actual positions of key-value pairs. It allows a stale learned cache to continue predicting a correct position for a lookup key. XStore ensures correctness using a validation mechanism with a fallback path and further uses speculative execution to minimize the cost of cache misses. Evaluations with YCSB benchmarks and production workloads show that a single XStore server can achieve over 80 million read-only requests per second. This number outperforms state-of-the-art RDMA-based ordered key-value stores (namely, DrTM-Tree, Cell, and eRPC+Masstree) by up to 5.9× (from 3.7×). For workloads with inserts, XStore still provides up to 3.5× (from 2.7×) throughput speedup, achieving 53M reqs/s. The learned cache can also reduce client-side memory usage and further provides an efficient memory-performance tradeoff, e.g., saving 99% memory at the cost of 20% peak throughput.
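The learned-cache idea can be sketched with a toy learned index (hypothetical Python, far simpler than XStore's models): fit a linear model mapping keys to positions in a sorted array, record its maximum prediction error, and resolve a lookup with a bounded local search around the prediction instead of a multi-round-trip tree traversal.

```python
# Toy learned index over a sorted key array (illustrative sketch only).
def train(keys):
    """Fit position ~ a*key + b by least squares; record max error bound."""
    n = len(keys)
    xs, ys = keys, range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    err = max(abs((a * k + b) - i) for i, k in enumerate(keys))
    return a, b, int(err) + 1

def lookup(keys, model, key):
    """Predict a position, then search only within the error bound."""
    a, b, err = model
    pred = int(a * key + b)
    lo = max(0, pred - err)
    hi = min(len(keys), pred + err + 1)
    # This bounded local scan replaces a remote tree traversal.
    for i in range(lo, hi):
        if keys[i] == key:
            return i
    return -1

keys = [3, 10, 21, 33, 47, 60, 71, 85, 99, 110]
model = train(keys)
assert all(lookup(keys, model, k) == i for i, k in enumerate(keys))
```

With RDMA, the bounded search window translates into one (or few) remote reads of a known address range, which is the source of XStore's round-trip savings.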
12

de Sousa, André F., Kiriana K. Cowansage, Ipshita Zutshi, Leonardo M. Cardozo, Eun J. Yoo, Stefan Leutgeb, and Mark Mayford. "Optogenetic reactivation of memory ensembles in the retrosplenial cortex induces systems consolidation." Proceedings of the National Academy of Sciences 116, no. 17 (March 15, 2019): 8576–81. http://dx.doi.org/10.1073/pnas.1818432116.

Abstract:
The neural circuits underlying memory change over prolonged periods after learning, in a process known as systems consolidation. Postlearning spontaneous reactivation of memory-related neural ensembles is thought to mediate this process, although a causal link has not been established. Here we test this hypothesis in mice by using optogenetics to selectively reactivate neural ensembles representing a contextual fear memory (sometimes referred to as engram neurons). High-frequency stimulation of these ensembles in the retrosplenial cortex 1 day after learning produced a recent memory with features normally observed in consolidated remote memories, including higher engagement of neocortical areas during retrieval, contextual generalization, and decreased hippocampal dependence. Moreover, this effect was only present if memory ensembles were reactivated during sleep or light anesthesia. These results provide direct support for postlearning memory ensemble reactivation as a mechanism of systems consolidation, and show that this process can be accelerated by ensemble reactivation in an unconscious state.
13

Hong, Da Hee, Jae Hoon Yoo, Won Ji Park, So Won Kim, Jong Hwan Kim, Sae Hoon Uhm, and Hee Chul Lee. "Characteristics of Hf0.5Zr0.5O2 Thin Films Prepared by Direct and Remote Plasma Atomic Layer Deposition for Application to Ferroelectric Memory." Nanomaterials 13, no. 5 (February 27, 2023): 900. http://dx.doi.org/10.3390/nano13050900.

Abstract:
Hf0.5Zr0.5O2 (HZO) thin film exhibits ferroelectric properties and is presumed to be suitable for use in next-generation memory devices because of its compatibility with the complementary metal–oxide–semiconductor (CMOS) process. This study examined the physical and electrical properties of HZO thin films deposited by two plasma-enhanced atomic layer deposition (PEALD) methods—direct plasma atomic layer deposition (DPALD) and remote plasma atomic layer deposition (RPALD)—and the effects of plasma application on the properties of HZO thin films. The initial conditions for HZO thin film deposition, depending on the RPALD deposition temperature, were established based on previous research on HZO thin films deposited by the DPALD method. The results show that as the measurement temperature increases, the electrical properties of DPALD HZO quickly deteriorate; however, the RPALD HZO thin film exhibited excellent fatigue endurance at measurement temperatures of 60 °C or less. HZO thin films deposited by the DPALD and RPALD methods exhibited relatively good remanent polarization and fatigue endurance, respectively. These results confirm the applicability of HZO thin films deposited by the RPALD method to ferroelectric memory devices.
14

Cilardo, Alessandro. "Evaluation of HPC Acceleration and Interconnect Technologies for High-Throughput Data Acquisition." Sensors 21, no. 22 (November 22, 2021): 7759. http://dx.doi.org/10.3390/s21227759.

Abstract:
Efficient data movement in multi-node systems is a crucial issue at the crossroads of scientific computing, big data, and high-performance computing, impacting demanding data acquisition applications from high-energy physics to astronomy, where dedicated accelerators such as FPGA devices play a key role coupled with high-performance interconnect technologies. Building on the outcome of the RECIPE Horizon 2020 research project, this work evaluates the use of high-bandwidth interconnect standards, namely InfiniBand EDR and HDR, along with remote direct memory access functions for direct exposure of FPGA accelerator memory across a multi-node system. The prototype we present aims at avoiding dedicated network interfaces built in the FPGA accelerator itself, leaving most of the resources for user acceleration and supporting state-of-the-art interconnect technologies. We present the detail of the proposed system and a quantitative evaluation in terms of end-to-end bandwidth as concretely measured with a real-world FPGA-based multi-node HPC workload.
15

Kalney, Marina S. "The role of humanities in the course of transmitting social memory." SHS Web of Conferences 103 (2021): 01019. http://dx.doi.org/10.1051/shsconf/202110301019.

Abstract:
The article examines the contradiction between the transformation of information technology into an integral part of contemporary educational activities and the threat of reducing the quality of the educational process and dehumanizing the individual resulting from the implementation of digital educational technologies. The need for distance work and remote communication is particularly evident in the situation of the pandemic, which requires, on the one hand, restricting direct interpersonal contacts to prevent the spread of infection and, on the other hand, continuing labor activity to prevent an economic downturn. The disadvantages of the distance framework in the educational process concern a decrease in discipline among students, a deterioration in the quality of learning of educational material, economic inaccessibility for low-income families, insufficient coverage of territories with cellular communication, as well as housing conditions that do not allow organizing remote work and training. These shortcomings become a threat of increasing social stratification due to the impossibility of universal access to even basic education. The essence of the protests against distance learning shows that the problems are seen not only in the lack of organization of the learning process but also in the very fact of abandoning the traditional learning model. One of the factors of such a threat is the absolutization of the role of information technology in the learning process, which leads to a deterioration in the level of education. To resolve this contradiction, the authors propose to consider the role of humanitarian knowledge as a meaningful aspect of the educational process.
16

Campodonico, Jeffrey R., and Sharilyn Rediess. "Dissociation of implicit and explicit knowledge in a case of psychogenic retrograde amnesia." Journal of the International Neuropsychological Society 2, no. 2 (March 1996): 146–58. http://dx.doi.org/10.1017/s1355617700001004.

Abstract:
There have been few studies of psychogenic amnesia based on a cognitive or neuropsychological framework. In the present study, a patient with acute onset of profound psychogenic retrograde amnesia was examined. Although her performance on neuropsychological tasks revealed intact anterograde memory, language functioning, visuospatial and constructional skills, and mental speed and flexibility, she displayed severe impairments on a variety of retrograde memory tasks. Furthermore, initial observations revealed inconsistencies between the patient’s recall of semantic knowledge on direct questioning and her ability to demonstrate the use of this knowledge on indirect tasks. To test this formally, we devised an indirect remote knowledge task to examine a possible dissociation between explicit and implicit memory. Two healthy subjects matched for age, gender, education, occupation, and estimated IQ were also tested. As predicted, the findings demonstrate implicit knowledge despite impaired explicit recall for the same material. (JINS, 1996, 2, 146–158.)
17

Sachyani Keneth, Ela, Rama Lieberman, Matthew Rednor, Giulia Scalet, Ferdinando Auricchio, and Shlomo Magdassi. "Multi-Material 3D Printed Shape Memory Polymer with Tunable Melting and Glass Transition Temperature Activated by Heat or Light." Polymers 12, no. 3 (March 23, 2020): 710. http://dx.doi.org/10.3390/polym12030710.

Abstract:
Shape memory polymers are attractive smart materials that have many practical applications and are of academic interest. Three-dimensional (3D) printable shape memory polymers are of great importance for the fabrication of soft robotic devices due to their ability to build complex 3D structures with desired shapes. We present a 3D printable shape memory polymer, with controlled melting and transition temperatures, composed of methacrylated polycaprolactone monomers and N-Vinylcaprolactam reactive diluent. Tuning the ratio between the monomers and the diluents resulted in changes in melting and transition temperatures by 20 and 6 °C, respectively. The effect of the diluent addition on the shape memory behavior and mechanical properties was studied, showing above 85% recovery ratio and above 90% fixity when the concentration of the diluent was up to 40 wt %. Finally, we demonstrated multi-material printing of a 3D structure that can be activated locally, at two different temperatures, by two different stimuli: direct heating and light irradiation. The remote light activation was enabled by utilizing a coating of carbon nanotubes (CNTs) as an absorbing material onto sections of the printed objects.
18

Cai, Yan, Wenlong Xie, and Haihua Zhang. "Remote distributed monitoring system of switched reluctance motor." Measurement and Control 52, no. 3-4 (March 2019): 276–90. http://dx.doi.org/10.1177/0020294019836111.

Abstract:
The reliable operation, dynamic performance analysis, and control strategy research of a switched reluctance motor (SRM) require an online monitoring system to display and record its operating status. However, due to the large amount of data, the nonlinear electromagnetic characteristics, and the harsh working environment of an SRM, it is very difficult to monitor an SRM's operating status in real time. In order to solve these problems, a new structure for an SRM monitoring system, which uses a Digital Signal Processor (DSP) and the hardwired Transmission Control Protocol/Internet Protocol embedded Ethernet controller W5500, is presented in this paper. The W5500 and the DSP's direct memory access modules are employed for data capture and transfer to reduce the digital signal processing workload. The digital signal processing program is implemented by a hybrid programming method, which shortens the filtering time. Consequently, the DSP has sufficient resources to acquire multiple signals with a high sampling frequency and can adopt a more complex filtering algorithm, which enhances the accuracy and real-time performance of the system. Moreover, the amplitude–frequency characteristics of the signals are analyzed. Then, the detection circuits and finite impulse response filters are designed to achieve the targeted acquisition and filtering. Besides, the impact of a harsh environment on the system is reduced by adjusting the data transmission modes according to different working conditions. As a result, the scope of application of the system has been extended. The proposed system has a novel structure and strong practicability, which exhibits great guiding significance for the development of an SRM.
19

Jang, Hankook, Sang-Hwa Chung, and Dae-Hyun Yoo. "Design and implementation of a protocol offload engine for TCP/IP and remote direct memory access based on hardware/software coprocessing." Microprocessors and Microsystems 33, no. 5-6 (August 2009): 333–42. http://dx.doi.org/10.1016/j.micpro.2009.03.001.

20

He, Chun Lin. "The Design and Implementation of the Information Remote Monitoring and Security Management System Based on Internet." Advanced Materials Research 846-847 (November 2013): 1414–17. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1414.

Abstract:
Flammable, explosive, toxic, and otherwise harmful gases are unavoidable in many industrial production processes, so remote security monitoring is of great significance for minimizing the potential dangers. With the rapid development of network technology, implementing remote monitoring over the Internet can effectively avoid many potential hazards in industrial production. However, user feedback shows that the user experience of existing systems is not ideal in actual applications. Therefore, based on an analysis of the traditional remote monitoring system architecture, a new remote monitoring system was designed through further research to avoid these shortcomings. The new system consists of three parts: 1. The client: Flex rich-client technology provides a convenient communication interface that users find familiar and easy to use. 2. The data management layer: it comprises a Web server layer, an application server layer, and a database server layer; the application server improves the system's real-time performance through Web Service technology, shared memory, and streaming sockets, which gives clients direct access to the site. 3. The data acquisition layer: data are collected in a timely manner and sent to the data management layer via GPRS.
21

Chen, Hongzhi, Changji Li, Chenguang Zheng, Chenghuan Huang, Juncheng Fang, James Cheng, and Jian Zhang. "G-tran." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2545–58. http://dx.doi.org/10.14778/3551793.3551813.

Abstract:
Graph transaction processing poses unique challenges such as random data access due to the irregularity of graph structures, low throughput and high abort rate due to the relatively large read/write sets in graph transactions. To address these challenges, we present G-Tran, a remote direct memory access (RDMA)-enabled distributed in-memory graph database with serializable and snapshot isolation support. First, we propose a graph-native data store to achieve good data locality and fast data access for transactional updates and queries. Second, G-Tran adopts a fully decentralized architecture that leverages RDMA to process distributed transactions with the massively parallel processing (MPP) model, which can achieve high performance by utilizing all computing resources. In addition, we propose a new multi-version optimistic concurrency control (MV-OCC) protocol with two optimizations to address the issue of large read/write sets in graph transactions. Extensive experiments show that G-Tran achieves competitive performance compared with other popular graph databases on benchmark workloads.
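The MV-OCC protocol named above combines two standard ingredients, multi-version snapshot reads and commit-time validation, which a toy single-process sketch can illustrate (hypothetical code; G-Tran's actual protocol is distributed, RDMA-based, and adds further optimizations):

```python
# Toy multi-version optimistic concurrency control (MV-OCC) sketch.
store = {"x": [(0, 10)]}   # key -> list of (version, value)
clock = 0                  # global commit counter

def read(key, snapshot):
    """Return the newest value visible at the given snapshot version."""
    visible = [v for v in store[key] if v[0] <= snapshot]
    return max(visible)[1]

def commit(read_set, writes, snapshot):
    """Validate the read set, then install writes at a new version."""
    global clock
    for key in read_set:
        if max(v[0] for v in store[key]) > snapshot:
            return None    # someone committed after our snapshot: abort
    clock += 1
    for key, value in writes.items():
        store.setdefault(key, []).append((clock, value))
    return clock

# Txn A reads x at snapshot 0 and increments it: commits at version 1.
a_val = read("x", 0)
assert commit({"x"}, {"x": a_val + 1}, snapshot=0) == 1
# Txn B also read x at snapshot 0; its validation now fails, so it aborts.
assert commit({"x"}, {"x": a_val + 5}, snapshot=0) is None
assert read("x", 1) == 11
```

Keeping old versions lets readers proceed without locks (snapshot isolation), while validation at commit time provides the optimistic path to serializability.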
22

Krawczyk, Rafał Dominik, Tommaso Colombo, Niko Neufeld, Flavio Pisani, and Sébastien Valat. "Feasibility tests of RoCE v2 for LHCb event building." EPJ Web of Conferences 245 (2020): 01011. http://dx.doi.org/10.1051/epjconf/202024501011.

Full text
Abstract:
This paper evaluates the utilization of Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) for the Run 3 LHCb event building at CERN. The acquisition system of the detector will collect partial data from approximately 1000 separate detector streams. The total estimated throughput equals 32 Terabits per second. Full events will be assembled for subsequent processing and data selection in the filtering farm of the online trigger. High-throughput transmissions with up to 90% link utilization will be an essential feature of the system. The data exchange mechanism must support zero-copy transmissions. In this work, the RoCE high-throughput kernel-bypass Ethernet protocol is benchmarked as a potential alternative to InfiniBand. A RoCE-based event building network is presented and two implementations are considered. The former variant combined shallow-buffered and deep-buffered switches with flow control enabled. In the latter setup, only deep-buffered devices are used, whose operation relies on their memory throughput and capacity. Feasibility tests were conducted with selected Ethernet switches. Memory bandwidth utilization was investigated in comparison with InfiniBand. Relevant utilization and interoperability issues of RoCE flow control are detailed, along with lessons learned along the way.
APA, Harvard, Vancouver, ISO, and other styles
23

KEE, YANGSUK, and SOONHOI HA. "AN EFFICIENT IMPLEMENTATION OF THE BSP PROGRAMMING LIBRARY FOR VIA." Parallel Processing Letters 12, no. 01 (March 2002): 65–77. http://dx.doi.org/10.1142/s0129626402000835.

Full text
Abstract:
Virtual Interface Architecture (VIA) is a light-weight protocol for protected user-level zero-copy communication. In spite of the promised high performance of VIA, previous MPI implementations for GigaNet's cLAN revealed low communication performance. The two main sources of this low performance are the discrepancy between the communication models of MPI and VIA and the multi-threading overhead. In this paper, we propose a new implementation of the Bulk Synchronous Parallel (BSP) programming library for VIA, called xBSP, to overcome these problems. To the best of our knowledge, xBSP is the first implementation of the BSP library for VIA. xBSP demonstrates that the selection of a proper library is important to exploit the features of light-weight protocols. Intensive use of Remote Direct Memory Access (RDMA) operations leads to high performance close to the native VIA performance with respect to round-trip delay and bandwidth. Considering the effects of multi-threading, memory registration, and completion policy on performance, we obtained an efficient BSP implementation for cLAN, which was confirmed by experimental results.
APA, Harvard, Vancouver, ISO, and other styles
24

Li, Lin, Jian Liang, Min Weng, and Haihong Zhu. "A Multiple-Feature Reuse Network to Extract Buildings from Remote Sensing Imagery." Remote Sensing 10, no. 9 (August 24, 2018): 1350. http://dx.doi.org/10.3390/rs10091350.

Full text
Abstract:
Automatic building extraction from remote sensing imagery is important in many applications. The success of convolutional neural networks (CNNs) has also led to advances in using CNNs to extract man-made objects from high-resolution imagery. However, the large appearance and size variations of buildings make it difficult to extract both crowded small buildings and large buildings. High-resolution imagery must be segmented into patches for CNN models due to GPU memory limitations, and buildings are typically only partially contained in a single patch with little context information. To overcome the problems involved when using different levels of image features with common CNN models, this paper proposes a novel CNN architecture called a multiple-feature reuse network (MFRN) in which each layer is connected to all the subsequent layers of the same size, enabling the direct use of the hierarchical features in each layer. In addition, the model includes a smart decoder that enables precise localization with less GPU load. We tested our model on a large real-world remote sensing dataset and obtained an overall accuracy of 94.5% and an 85% F1 score, which outperformed the compared CNN models, including a 56-layer fully convolutional DenseNet with 93.8% overall accuracy and an F1 score of 83.5%. The experimental results indicate that the MFRN approach to connecting convolutional layers improves the performance of common CNN models for extracting buildings of different sizes and can achieve high accuracy with a consumer-level GPU.
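The connectivity the abstract describes, each layer connected to all subsequent layers of the same size, is the DenseNet-style feature-reuse pattern. A minimal sketch with random placeholder weights (illustrative shapes and names, not the trained MFRN architecture):

```python
import numpy as np

def dense_block(x, n_layers, growth, rng):
    # Each "layer" sees the concatenation of all previous feature maps of
    # the same spatial size and appends `growth` new channels -- the
    # feature-reuse pattern the MFRN abstract describes. Weights here are
    # random placeholders, not a trained model.
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=-1)        # reuse all earlier maps
        w = rng.standard_normal((inp.shape[-1], growth))
        features.append(np.maximum(inp @ w, 0.0))      # 1x1-conv + ReLU stand-in
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))       # one 8x8 patch, 16 channels
out = dense_block(x, n_layers=4, growth=12, rng=rng)
print(out.shape)  # (8, 8, 64): 16 input channels + 4 layers x 12 new channels
```

The output channel count grows linearly with depth while every layer retains direct access to all earlier feature maps, which is what enables the hierarchical feature reuse claimed for the MFRN.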
APA, Harvard, Vancouver, ISO, and other styles
25

Saeed, Dr Abdul Razzaq Ahmed. "Geographical and modern technologies." ALUSTATH JOURNAL FOR HUMAN AND SOCIAL SCIENCES 216, no. 2 (March 1, 2016): 29–38. http://dx.doi.org/10.36473/ujhss.v216i2.589.

Full text
Abstract:
Three modern techniques are used in contemporary geographical science and its scientific applications: Geographic Information System (GIS) technology, Remote Sensing (RS) technology, and Global Positioning System (GPS) technology. These three systems have contributed to a great scientific revolution across all of modern geographical science and its applications. GIS is a way of organizing geographical and non-geographical information by computer and linking it to geographical positions through specific coordinates. Coordinates are thus a means of linking geographical phenomena scattered on the Earth's surface to a coordinate system; the information is stored in computer memory, linked through a database to the metadata associated with these phenomena, analyzed, rendered at a specific scale, and then printed. Remote sensing technology, as used in modern geographical applications, represents a set of processes that provide information about the geographical characteristics of phenomena on the Earth's surface without any direct contact between the geographical phenomenon and the sensor (the information-capturing device). Remote sensors can be arranged on a wide variety of airborne or spaceborne platforms at different heights, and the initial information received by the sensor is either turned directly into usable products, such as aerial photographs or satellite imagery, or stored in dedicated devices that can be consulted when needed in the future.
APA, Harvard, Vancouver, ISO, and other styles
26

Ortega, Sterling B., Vanessa O. Torres, Sarah E. Latchney, Cody W. Whoolery, Ibrahim Z. Noorbhai, Katie Poinsatte, Uma M. Selvaraj, et al. "B cells migrate into remote brain areas and support neurogenesis and functional recovery after focal stroke in mice." Proceedings of the National Academy of Sciences 117, no. 9 (February 12, 2020): 4983–93. http://dx.doi.org/10.1073/pnas.1913292117.

Full text
Abstract:
Lymphocytes infiltrate the stroke core and penumbra and often exacerbate cellular injury. B cells, however, are lymphocytes that do not contribute to acute pathology but can support recovery. B cell adoptive transfer to mice reduced infarct volumes 3 and 7 d after transient middle cerebral artery occlusion (tMCAo), independent of changing immune populations in recipient mice. Testing a direct neurotrophic effect, B cells cocultured with mixed cortical cells protected neurons and maintained dendritic arborization after oxygen-glucose deprivation. Whole-brain volumetric serial two-photon tomography (STPT) and a custom-developed image analysis pipeline visualized and quantified poststroke B cell diapedesis throughout the brain, including remote areas supporting functional recovery. Stroke induced significant bilateral B cell diapedesis into remote brain regions regulating motor and cognitive functions and neurogenesis (e.g., dentate gyrus, hypothalamus, olfactory areas, cerebellum) in the whole-brain datasets. To confirm a mechanistic role for B cells in functional recovery, rituximab was given to human CD20+(hCD20+) transgenic mice to continuously deplete hCD20+-expressing B cells following tMCAo. These mice experienced delayed motor recovery, impaired spatial memory, and increased anxiety through 8 wk poststroke compared to wild type (WT) littermates also receiving rituximab. B cell depletion reduced stroke-induced hippocampal neurogenesis and cell survival. Thus, B cell diapedesis occurred in areas remote to the infarct that mediated motor and cognitive recovery. Understanding the role of B cells in neuronal health and disease-based plasticity is critical for developing effective immune-based therapies for protection against diseases that involve recruitment of peripheral immune cells into the injured brain.
APA, Harvard, Vancouver, ISO, and other styles
27

Sheets, Payson. "PILGRIMAGES AND PERSISTENT SOCIAL MEMORY IN SPITE OF VOLCANIC DISASTERS IN THE ARENAL AREA, COSTA RICA." Ancient Mesoamerica 22, no. 2 (2011): 425–35. http://dx.doi.org/10.1017/s0956536111000265.

Full text
Abstract:
AbstractAncient Costa Ricans in the Arenal area exhibited extraordinary persistence in landscape use and social memory, in spite of repeated catastrophes caused by explosive volcanic eruptions. The Cañales village on the south shore of Lake Arenal was struck by two large explosive eruptions during the Arenal phase (500 b.c.–a.d. 600). Following ecological recovery, the village was reoccupied after each of these eruptions. I argue that the people who reoccupied the village were direct descendants of pre-disaster villagers due to the fact that they reinstated use of the same path to the village cemetery. While previous interpretations emphasized ecological reasons for village reoccupation, I suggest that a dominating reason for reoccupation was to re-establish contact with the spirits of deceased ancestors in the cemetery. The living and the spirits of the deceased constituted the functioning community. The refugees re-established processional access to their cemetery as soon as possible, perhaps even before the village was reoccupied. Archaeologists rarely discover evidence of ancient pilgrimages. However, the combination of remote sensing and detailed stratigraphic analyses allow them to be detected in the Arenal area. Villagers created and perpetuated social memory by regular linear ritual processions along precisely the same path, in spite of challenging topography and occasional regional disasters obscuring the path. This recognition has implications for the arguments of sedentism versus residential mobility during the Arenal phase.
APA, Harvard, Vancouver, ISO, and other styles
28

Ye, Yuejin, Zhenya Song, Shengchang Zhou, Yao Liu, Qi Shu, Bingzhuo Wang, Weiguo Liu, Fangli Qiao, and Lanning Wang. "swNEMO_v4.0: an ocean model based on NEMO4 for the new-generation Sunway supercomputer." Geoscientific Model Development 15, no. 14 (July 25, 2022): 5739–56. http://dx.doi.org/10.5194/gmd-15-5739-2022.

Full text
Abstract:
Abstract. The current large-scale parallel barrier of ocean general circulation models (OGCMs) makes it difficult to meet the computing demand of high resolution. Fully considering both the computational characteristics of OGCMs and the heterogeneous many-core architecture of the new Sunway supercomputer, swNEMO_v4.0, based on NEMO4 (Nucleus for European Modelling of the Ocean version 4), is developed with ultrahigh scalability. Three innovations and breakthroughs are shown in our work: (1) a highly adaptive, efficient four-level parallelization framework for OGCMs is proposed to release a new level of parallelism along the compute-dependency column dimension. (2) A many-core optimization method using blocking by remote memory access (RMA) and a dynamic cache scheduling strategy is applied, effectively utilizing the temporal and spatial locality of data. The test shows that the actual direct memory access (DMA) bandwidth is greater than 90 % of the ideal bandwidth after optimization, and the maximum is up to 95 %. (3) A mixed-precision optimization method with half, single and double precision is explored, which can effectively improve the computation performance while maintaining the simulated accuracy of OGCMs. The results demonstrate that swNEMO_v4.0 has ultrahigh scalability, achieving up to 99.29 % parallel efficiency with a resolution of 500 m using 27 988 480 cores, reaching the peak performance with 1.97 PFLOPS.
APA, Harvard, Vancouver, ISO, and other styles
29

Krawczyk, Rafał Dominik, Flavio Pisani, Tommaso Colombo, Markus Frank, and Niko Neufeld. "Ethernet evaluation in data distribution traffic for the LHCb filtering farm at CERN." EPJ Web of Conferences 251 (2021): 04001. http://dx.doi.org/10.1051/epjconf/202125104001.

Full text
Abstract:
This paper evaluates the real-time distribution of data over Ethernet for the upgraded LHCb data acquisition cluster at CERN. The system commissioning ends in 2021 and its total estimated input throughput is 32 Terabits per second. After the events are assembled, they must be distributed for further data selection to the filtering farm of the online trigger. High-throughput and very low overhead transmissions will be an essential feature of such a system. In this work RoCE (Remote Direct Memory Access over Converged Ethernet) high-throughput Ethernet protocol and Ethernet flow control algorithms have been used to implement lossless event distribution. To generate LHCb-like traffic, a custom benchmark has been implemented. It was used to stress-test the selected Ethernet networks and to check resilience to uneven workload distribution. Performance tests were made with selected evaluation clusters. 100 Gb/s and 25 Gb/s links were used. Performance results and overall evaluation of this Ethernet-based approach are discussed.
APA, Harvard, Vancouver, ISO, and other styles
30

Geetha J., Uday Bhaskar N, and Chenna Reddy P. "An Analytical Approach for Optimizing the Performance of Hadoop Map Reduce Over RoCE." International Journal of Information Communication Technologies and Human Development 10, no. 2 (April 2018): 1–14. http://dx.doi.org/10.4018/ijicthd.2018040101.

Full text
Abstract:
Data intensive systems aim to efficiently process “big” data. Several data processing engines, modeled around the MapReduce paradigm, have evolved over the past decade. This article explores Hadoop's MapReduce engine and proposes techniques to obtain a higher level of optimization by borrowing concepts from the world of High Performance Computing; consequently, power consumed and heat generated are lowered. This article designs a system with a pipelined dataflow, in contrast to the existing unregulated “bursty” flow of network traffic; the ability to carry out both Map and Reduce tasks in parallel; and the incorporation of modern high-performance computing concepts using Remote Direct Memory Access (RDMA). To establish the claim of an increased performance measure of the proposed system, the authors provide an algorithm for RoCE-enabled MapReduce and a mathematical derivation contrasting the runtime of vanilla Hadoop. This article proves mathematically that the proposed system functions 1.67 times faster than the vanilla version of Hadoop.
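The kind of speedup claimed above comes from overlapping the network shuffle with computation instead of running the stages in sequence. A back-of-the-envelope model with hypothetical stage times (not the paper's actual derivation or measurements) shows how pipelining yields a factor of this order:

```python
# Illustrative runtime model (hypothetical numbers, not the paper's):
# vanilla Hadoop runs map, shuffle, and reduce largely in sequence, while
# an RDMA/RoCE-pipelined design overlaps the shuffle with computation.

def vanilla_runtime(t_map, t_shuffle, t_reduce):
    return t_map + t_shuffle + t_reduce   # sequential stages

def pipelined_runtime(t_map, t_shuffle, t_reduce):
    # Shuffle fully overlapped with computation: the longest stage
    # dominates, plus pipeline fill/drain of the shorter compute stage.
    return max(t_map, t_shuffle, t_reduce) + min(t_map, t_reduce)

t_map, t_shuffle, t_reduce = 60.0, 50.0, 40.0   # hypothetical seconds
speedup = vanilla_runtime(t_map, t_shuffle, t_reduce) / \
          pipelined_runtime(t_map, t_shuffle, t_reduce)
print(f"modelled speedup: {speedup:.2f}x")  # 150 / 100 = 1.50x here
```

With different stage-time ratios the same overlap argument produces different factors; the paper's 1.67x figure follows from its own derivation, which this toy model only mimics in spirit.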
APA, Harvard, Vancouver, ISO, and other styles
31

Barghash, Ahmad, Lina Hammad, and Ammar Gharaibeh. "Traditional vs. Modern Data Paths: A Comprehensive Survey." Computers 11, no. 9 (August 31, 2022): 132. http://dx.doi.org/10.3390/computers11090132.

Full text
Abstract:
Recently, many new network data paths have been introduced while old paths remain in use, and the trade-offs between them remain vague and should be further addressed. Over the last decade, the Internet has come to play a major role in people's lives, and demand for it in all fields has increased rapidly. To provide fast and secure connections, the networks delivering the service must become faster and more reliable. Many network data paths have been proposed to achieve these objectives since the 1970s, starting with the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) and later followed by several more modern paths, including Quick UDP Internet Connections (QUIC), remote direct memory access (RDMA), and the Data Plane Development Kit (DPDK). This raises the question of which data path should be adopted and based on which features. In this work, we try to answer this question from different perspectives: protocol techniques, latency and congestion control, head-of-line blocking, achieved throughput, middlebox considerations, loss recovery mechanisms, developer productivity, host resource utilization, and targeted applications.
APA, Harvard, Vancouver, ISO, and other styles
32

Wiewiorowski, Jacek. "Christian Influence on the Roman Calendar. Comments in the Margins of C. Th. 9.35.4 = C. 3.12.5 (a. 380)1/ Wpływ chrześciaństwa na kalendarz rzymski. Uwagi na marginesie C. Th. 9.35.4 = C. 3.12.5 (a. 380)." Studia Prawnicze KUL, no. 4 (December 31, 2019): 213–33. http://dx.doi.org/10.31743/sp.10615.

Full text
Abstract:
The text analyses the Christianisation of the Roman calendar in the light of the Roman imperial constitutions of the 4th century. The author first underlines that only humans recognise religious feasts, even though human perception of time is not that remote from the apperception of time in other animals, and that belief in the supernatural/religion and rituals belongs to the human universals, the roots of which, together with those of the judiciary, are to be sought in the evolutionary past of the genus Homo. Furthermore, the author deduces that the first direct Christian influence on the official Roman calendar was probably C. Th. 9,35,4 = C. 3,12,5 (a. 380), prohibiting all investigation of criminal cases by means of torture during the forty days preceding the Paschal season, contesting the opinion that dies solis were already regarded as dies dominicus (the Christian Sunday) in C. Th. 2,8,1 and C. 3,12,2 (a. 321). Finally, in the margin of the Polish debate concerning the limitation of legal trade on Sundays, in which the Constantinian roots of dies dominicus were quoted frequently and with great conviction, the limitations of the politics of memory are underlined.
APA, Harvard, Vancouver, ISO, and other styles
33

Arief, Lathifah, Fajril Akbar, Nefy Puteri Novani, and Iqbal Saputra. "Pengujian Kinerja Server Portable Berbasis Single Board Computer (SBC) Dalam Mendukung Kegiatan Pembelajaran." Jurnal Nasional Teknologi dan Sistem Informasi 4, no. 2 (September 2, 2018): 98–106. http://dx.doi.org/10.25077/teknosi.v4i2.2018.98-106.

Full text
Abstract:
Computer facilities or other hands-on teaching aids are clearly needed to equip learners with practical skills in computing and information technology. This study proposes a portable server based on a single-board computer (SBC) and WiFi-Direct technology as a learning aid with several characteristics: (a) physically portable, (b) low cost, (c) accessible from a variety of user device platforms, and (d) independent of Internet connectivity. The focus of this research is the design and construction of this portable server and the testing of its performance when used in hands-on learning. The server was successfully accessed by several different devices (tested with up to four different device platforms), both separately and simultaneously. The performance tests covered system response time and server computing resource usage. The response-time test determined how much time is needed, and must be allocated, for server preparation within the time frame of a learning activity. The results were quite good: the longest total time from server boot until a user successfully logged in was 2 minutes 45 seconds. The resource-usage testing in this paper focused on baselining, to profile server resource usage under minimum load when accessed by a single user using a single learning tool. The baseline covered CPU, memory, and network usage for several scenarios: access via UTP cable, access via the server's hotspot, remote shell activity, remote desktop activity, and access to a command-line-interface (CLI) learning tool.
The expected follow-up to this research is response-time testing, load testing, and testing of the server's capacity limits when accessed simultaneously by many users, representing a typical class size.
APA, Harvard, Vancouver, ISO, and other styles
34

Wu, Shiyu, Zhichao Xu, Feng Wang, Dongkai Yang, and Gongjian Guo. "An Improved Back-Projection Algorithm for GNSS-R BSAR Imaging Based on CPU and GPU Platform." Remote Sensing 13, no. 11 (May 27, 2021): 2107. http://dx.doi.org/10.3390/rs13112107.

Full text
Abstract:
Global Navigation Satellite System Reflectometry Bistatic Synthetic Aperture Radar (GNSS-R BSAR) is becoming more and more important in remote sensing because of its low power, low mass, low cost, and real-time global coverage capability. The Back Projection Algorithm (BPA) is usually selected as the GNSS-R BSAR imaging algorithm because it can process echo signals of complex geometric configurations. However, its huge computational cost is a challenge for application in GNSS-R BSAR. Graphics Processing Units (GPU) provide an efficient computing platform for GNSS-R BSAR processing. In this paper, a solution accelerating the BPA of GNSS-R BSAR using a GPU is proposed to improve imaging efficiency, and a matching pre-processing program is proposed to synchronize direct and echo signals to improve imaging quality. To process the hundreds of gigabytes of data collected by a long-time synthetic aperture in fixed-station mode, a stream processing structure is used to handle this large data volume within limited GPU memory. To improve imaging efficiency, the imaging task is divided into pre-processing and BPA, performed on the Central Processing Unit (CPU) and GPU, respectively, and a pixel-oriented parallel processing method in back projection is adopted to avoid memory access conflicts caused by the excessive data volume. The improved BPA with a long synthetic aperture time is verified through simulations and experiments on the GPS-L5 signal. The results show that the proposed accelerating solution takes approximately 128.04 s, about 156 times faster than the pure-CPU framework, to produce a 600 m × 600 m image with an 1800 s synthetic aperture time; in addition, the same imaging quality as the existing processing solution is retained.
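The pixel-oriented parallelism described above works because every output pixel accumulates its own sum over pulses independently of all other pixels. A minimal CPU sketch of bistatic back projection (illustrative only; the paper's GPU version adds range interpolation, phase compensation for the GNSS bistatic geometry, and streamed processing of large data volumes):

```python
import numpy as np

# Minimal pixel-oriented back projection sketch. Function and parameter
# names are illustrative, not the paper's implementation.
def back_project(echo, tx_pos, rx_pos, grid_x, grid_y, fs, c=3e8):
    """echo: (n_pulses, n_range) complex samples after range compression.
    tx_pos: (n_pulses, 3) transmitter position per pulse (slow time).
    rx_pos: (3,) fixed receiver position."""
    image = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    for iy, y in enumerate(grid_y):          # each output pixel is
        for ix, x in enumerate(grid_x):      # independent -> one GPU
            p = np.array([x, y, 0.0])        # thread per pixel in practice
            # bistatic range: transmitter -> pixel -> receiver, per pulse
            r = (np.linalg.norm(tx_pos - p, axis=1)
                 + np.linalg.norm(rx_pos - p))
            idx = np.round(r / c * fs).astype(int)   # range-bin index
            valid = idx < echo.shape[1]
            image[iy, ix] = echo[np.arange(echo.shape[0])[valid],
                                 idx[valid]].sum()
    return image
```

Because no two pixels write to the same output element, mapping one GPU thread to one pixel avoids the memory-access conflicts the abstract mentions.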
APA, Harvard, Vancouver, ISO, and other styles
35

Geng, Junjie, Jinyao Yan, and Yuan Zhang. "P4QCN: Congestion Control Using P4-Capable Device in Data Center Networks." Electronics 8, no. 3 (March 2, 2019): 280. http://dx.doi.org/10.3390/electronics8030280.

Full text
Abstract:
Modern data centers aim to offer very high throughput and ultra-low latency to meet the demands of applications such as online intensive services. Traditional TCP/IP stacks cannot meet these requirements due to their high CPU overhead and high latency. Remote Direct Memory Access (RDMA) is an approach that can be designed to meet this demand. The mainstream transport protocol of RDMA over Ethernet is RoCE (RDMA over Converged Ethernet), which relies on Priority Flow Control (PFC) within the network to enable a lossless network. However, PFC is a coarse-grained protocol which can lead to problems such as congestion spreading and head-of-line blocking. A congestion control protocol that can alleviate these problems of PFC is needed. We propose a protocol, called P4QCN, for this purpose. P4QCN is a congestion control scheme for RoCE and an improved Quantized Congestion Notification (QCN) design based on P4; it is a flow-level, rate-based congestion control mechanism. P4QCN extends the QCN protocol to make it compatible with IP-routed networks based on a P4 framework and adopts a two-point algorithm architecture which is more effective than the three-point architecture used in QCN and Data Center QCN (DCQCN). Experiments show that our proposed P4QCN algorithm achieves the expected performance in terms of latency and throughput.
APA, Harvard, Vancouver, ISO, and other styles
36

Naing, Kaung Myat, Siridech Boonsang, Santhad Chuwongin, Veerayuth Kittichai, Teerawat Tongloy, Samrerng Prommongkol, Paron Dekumyoy, and Dorn Watthanakulpanich. "Automatic recognition of parasitic products in stool examination using object detection approach." PeerJ Computer Science 8 (August 17, 2022): e1065. http://dx.doi.org/10.7717/peerj-cs.1065.

Full text
Abstract:
Background: Object detection is a new artificial intelligence approach to the morphological recognition and labeling of parasitic pathogens. Due to the lack of equipment and trained personnel, artificial intelligence innovation for detecting various parasitic products in stool examination will enable patients in remote areas of undeveloped countries to access diagnostic services. Because object detection is a developing approach whose effectiveness has been tested for detecting intestinal parasitic objects such as protozoan cysts and helminthic eggs, it is suitable for use in rural areas where many factors supporting laboratory testing are still lacking. Based on the literature, YOLOv4-Tiny produces faster results and uses less memory with the support of low-end GPU devices. In comparison to the YOLOv3 and YOLOv3-Tiny models, this study aimed to propose an automated object detection approach, specifically the YOLOv4-Tiny model, for automatic recognition of intestinal parasitic products in stools. Methods: To identify protozoan cysts and helminthic eggs in human feces, three YOLO approaches (YOLOv4-Tiny, YOLOv3, and YOLOv3-Tiny) were trained to recognize 34 intestinal parasitic classes using a training image dataset. Feces were processed using a modified direct smear method adapted from the simple direct smear and the modified Kato-Katz methods. The image dataset was collected from intestinal parasitic objects discovered during stool examination, and the three YOLO models were trained to recognize the image datasets. Results: The non-maximum suppression technique and the threshold level were used to analyze the test dataset, yielding results of 96.25% precision and 95.08% sensitivity for YOLOv4-Tiny. Additionally, the YOLOv4-Tiny model had the best AUPRC performance of the three YOLO models, with a score of 0.963.
Conclusion: This study, to our knowledge, was the first to detect protozoan cysts and helminthic eggs in the 34 classes of intestinal parasitic objects in human stools.
APA, Harvard, Vancouver, ISO, and other styles
37

Dmitrenko, I. A., S. A. Kirillov, N. Serra, N. V. Koldunov, V. V. Ivanov, U. Schauer, I. V. Polyakov, et al. "Heat loss from the Atlantic water layer in the St. Anna Trough (northern Kara Sea): causes and consequences." Ocean Science Discussions 11, no. 1 (February 20, 2014): 543–73. http://dx.doi.org/10.5194/osd-11-543-2014.

Full text
Abstract:
Abstract. A distinct subsurface density front along the eastern St. Anna Trough in the northern Kara Sea is inferred from hydrographic observations in 1996 and 2008–2010. Direct velocity measurements show a persistent northward subsurface current (~20 cm s⁻¹) along the eastern flank of the St. Anna Trough. This sheared flow, carrying the outflow from the Barents and Kara Seas to the Arctic Ocean, is also evident from shipboard observations as well as from geostrophic velocities and numerical model simulations. Although no clear evidence for the occurrence of shear instabilities could be obtained, we speculate that enhanced vertical mixing along the eastern flank of the St. Anna Trough, promoted by vertical velocity shear, favors upward heat loss from the intermediate warm Atlantic water layer. The associated upward heat flux is estimated at 50–100 W m⁻² using hydrographic data and model simulations. A zone of lowered sea-ice thickness and concentration essentially marks the Atlantic water pathway in the St. Anna Trough and the adjacent Nansen Basin continental margin in both sea-ice remote sensing observations and model simulations. In fact, the sea ice shows a consistently delayed freeze-up onset during fall and a reduction in thickness during winter. This is consistent with our results on the enhanced Atlantic water heat loss along the Atlantic water pathway in the St. Anna Trough. (Dedicated to the memory of our colleague Klaus Hochheim, who tragically lost his life in an Arctic expedition in September 2013.)
APA, Harvard, Vancouver, ISO, and other styles
38

Gojaturk, Narman. "Mythological Plotsasa Self-Organizing Process." International Journal of Life Sciences 9, no. 6 (September 26, 2015): 91–95. http://dx.doi.org/10.3126/ijls.v9i6.13429.

Full text
Abstract:
Elements of mythical thinking, as archaeological types, have always attracted researchers' attention, yet problems of scientific interest remain in this area, including issues with a direct bearing on philosophical thought. The observation of striking similarities among the myths of the world's peoples is of this kind. These are plot similarities: even the mythical plots of nations located in geographies quite remote from one another contain elements and images similar to each other. The internal mechanism of mythical subjects' self-organization resuscitates folklore and gives it life at any time. Folklore becomes active only in a person's life: particular people renew forms that have existed for centuries on the basis of improvisation. In this way monotony is avoided, and the tradition gains a new breath in each person's performance as well. The tradition is kept but expressed with a new rhythm and breath, which shows that myths always live on, passing through the centuries in ever-new improvisation. In Turkish provinces everybody knows the Khizir's Spirit as a cradle song; it has been a symbol of holiness since its origin. Khizir, a thousand-year-old Turk, caresses his beloved in the language of the lullaby and bayaties. He has made a tale of his memory and created a great many sagas about himself. In this light, the evaluation of mythological plots retains its topicality as an important subject.
APA, Harvard, Vancouver, ISO, and other styles
39

Guo, Xing, Jianghai He, Biao Wang, and Jiaji Wu. "Prediction of Sea Surface Temperature by Combining Interdimensional and Self-Attention with Neural Networks." Remote Sensing 14, no. 19 (September 22, 2022): 4737. http://dx.doi.org/10.3390/rs14194737.

Full text
Abstract:
Sea surface temperature (SST) is one of the most important and widely used physical parameters for oceanography and meteorology. To obtain SST, in addition to direct measurement, remote sensing, and numerical models, a variety of data-driven models have been developed with a wealth of SST data being accumulated. As oceans are comprehensive and complex dynamic systems, the distribution and variation of SST are affected by various factors. To overcome this challenge and improve the prediction accuracy, a multi-variable long short-term memory (LSTM) model is proposed which takes wind speed and air pressure at sea level together with SST as inputs. Furthermore, two attention mechanisms are introduced to optimize the model. An interdimensional attention strategy, which is similar to the positional encoding matrix, is utilized to focus on important historical moments of multi-dimensional input; a self-attention strategy is adopted to smooth the data during the training process. Forty-three-year monthly mean SST and meteorological data from the fifth-generation ECMWF (European Centre for Medium-Range Weather Forecasts) reanalysis (ERA5) are collected to train and test the model for the sea areas around China. The performance of the model is evaluated in terms of different statistical parameters, namely the coefficient of determination, root mean squared error, mean absolute error and mean average percentage error, with a range of 0.9138–0.991, 0.3928–0.8789, 0.3213–0.6803, and 0.1067–0.2336, respectively. The prediction results indicate that it is superior to the LSTM-only model and models taking SST only as input, and confirm that our model is promising for oceanography and meteorology investigation.
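The multi-variable input described above (SST together with wind speed and sea-level air pressure) can be sketched as a sliding-window dataset-preparation step of the kind such LSTM models consume. Shapes, the window length, and the synthetic data below are illustrative, not the paper's configuration:

```python
import numpy as np

# Sketch of assembling multi-variable input windows for a sequence model
# such as the LSTM described above. Illustrative only.
def make_windows(series, window):
    """series: (time, n_features) array -> (samples, window, n_features)
    inputs and (samples,) next-step SST targets (feature 0 = SST)."""
    X, y = [], []
    for t in range(len(series) - window):
        X.append(series[t:t + window])
        y.append(series[t + window, 0])   # predict next month's SST
    return np.stack(X), np.array(y)

months = 120                               # ten years of monthly means
data = np.column_stack([
    20 + 5 * np.sin(2 * np.pi * np.arange(months) / 12),  # SST (degC)
    np.random.default_rng(0).normal(6, 2, months),        # wind speed (m/s)
    np.random.default_rng(1).normal(1013, 5, months),     # pressure (hPa)
])
X, y = make_windows(data, window=12)
print(X.shape, y.shape)   # (108, 12, 3) (108,)
```

Each training sample is then a (window, n_features) matrix, which is the shape over which the interdimensional attention described in the abstract would weight important historical moments.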
APA, Harvard, Vancouver, ISO, and other styles
40

Silk, Joseph. "Origin and evolution of the large-scale structure of the universe." Canadian Journal of Physics 68, no. 9 (September 1, 1990): 799–807. http://dx.doi.org/10.1139/p90-117.

Full text
Abstract:
Ever since the epoch of the spontaneous breaking of grand unification symmetry between the nuclear and electromagnetic interactions, the universe has expanded under the imprint of a spectrum of density fluctuations that is generally considered to have originated in this phase transition. I will discuss various possibilities for the form of the primordial fluctuation spectrum, spanning the range of adiabatic fluctuations, isocurvature fluctuations, and cosmic strings. Growth of the seed fluctuations by gravitational instability generates the formation of large-scale structures, from the scale of galaxies to that of clusters and superclusters of galaxies. There are three areas of confrontation with observational cosmology that will be reviewed. The large-scale distribution of the galaxies, including the apparent voids, sheets and filaments, and the coherent peculiar velocity field on scales of several tens of megaparsecs, probe the primordial fluctuation spectrum on scales that are only mildly nonlinear. Even larger scales are probed by study of the anisotropy of the cosmic microwave background radiation, which provides a direct glimpse of the primordial fluctuations that existed about 10⁶ years or so after the initial big bang singularity. Galaxy formation is the process by which the building blocks of the universe have formed, involving a complex interaction between hydrodynamical and dynamical processes in a collapsing gas cloud. Both by detection of forming galaxies in the most remote regions of the universe and by study of the fundamental morphological characteristics of galaxies, which provide a fossilized memory of their past, can one relate the origin of galaxies to the same primordial fluctuation spectrum that gave rise to the large-scale structure of the universe.
APA, Harvard, Vancouver, ISO, and other styles
41

Ponsard, Raphael, Nicolas Janvier, Jerome Kieffer, Dominique Houzet, and Vincent Fristot. "RDMA data transfer and GPU acceleration methods for high-throughput online processing of serial crystallography images." Journal of Synchrotron Radiation 27, no. 5 (July 31, 2020): 1297–306. http://dx.doi.org/10.1107/s1600577520008140.

Full text
Abstract:
The continual evolution of photon sources and high-performance detectors drives cutting-edge experiments that can produce very high throughput data streams and generate large data volumes that are challenging to manage and store. In these cases, efficient data transfer and processing architectures that allow online image correction, data reduction or compression become fundamental. This work investigates different technical options and methods for data placement from the detector head to the processing computing infrastructure, taking into account the particularities of modern modular high-performance detectors. In order to compare realistic figures, the future ESRF beamline dedicated to macromolecular X-ray crystallography, EBSL8, is taken as an example, which will use a PSI JUNGFRAU 4M detector generating up to 16 GB of data per second, operating continuously during several minutes. Although such an experiment seems possible at the target speed with the 100 Gb s⁻¹ network cards that are currently available, the simulations generated highlight some potential bottlenecks when using a traditional software stack. An evaluation of solutions is presented that implements remote direct memory access (RDMA) over converged ethernet techniques. A synchronization mechanism is proposed between a RDMA network interface card (RNIC) and a graphics processing unit (GPU) accelerator in charge of the online data processing. The placement of the detector images onto the GPU is made to overlap with the computation carried out, potentially hiding the transfer latencies. As a proof of concept, a detector simulator and a backend GPU receiver with a rejection and compression algorithm suitable for a synchrotron serial crystallography (SSX) experiment are developed. It is concluded that the available transfer throughput from the RNIC to the GPU accelerator is at present the major bottleneck in online processing for SSX experiments.
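The core idea in the abstract — overlapping image placement with computation so transfer latency is hidden — can be sketched with a bounded ring buffer between a producer (standing in for the RNIC) and a consumer (standing in for the GPU kernel). This is a conceptual stdlib sketch, not the paper's RDMA/GPU code; `pipeline`, `transfer`, and `process` are our own names:

```python
import queue
import threading

def pipeline(frames, transfer, process, depth=2):
    """Overlap 'transfer' (e.g. RDMA placement) with 'process' (e.g. a GPU
    kernel) via a bounded queue acting as a double/ring buffer: the consumer
    works on frame N while frame N+1 is in flight. Transferred items must not
    be None, since None is used as the end-of-stream sentinel."""
    buf = queue.Queue(maxsize=depth)
    results = []

    def producer():
        for f in frames:
            buf.put(transfer(f))   # blocks when all 'depth' buffers are full
        buf.put(None)              # sentinel: end of stream

    t = threading.Thread(target=producer)
    t.start()
    while (item := buf.get()) is not None:
        results.append(process(item))
    t.join()
    return results
```

With `depth=2` this is classic double buffering; raising `depth` trades memory for tolerance to jitter in either stage, which mirrors the paper's observation that the slower of the two stages (here, RNIC-to-GPU transfer) sets the sustained throughput.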
APA, Harvard, Vancouver, ISO, and other styles
42

Ma, Han, Shunlin Liang, Changhao Xiong, Qian Wang, Aolin Jia, and Bing Li. "Global land surface 250 m 8 d fraction of absorbed photosynthetically active radiation (FAPAR) product from 2000 to 2021." Earth System Science Data 14, no. 12 (December 7, 2022): 5333–47. http://dx.doi.org/10.5194/essd-14-5333-2022.

Full text
Abstract:
Abstract. The fraction of absorbed photosynthetically active radiation (FAPAR) is a critical land surface variable for carbon cycle modeling and ecological monitoring. Several global FAPAR products have been released and have become widely used; however, spatiotemporal inconsistency remains a large issue for the current products, and their spatial resolutions and accuracies can hardly meet the user requirements. An effective solution to improve the spatiotemporal continuity and accuracy of FAPAR products is to take better advantage of the temporal information in the satellite data using deep learning approaches. In this study, the latest version (V6) of the FAPAR product with a 250 m resolution was generated from Moderate Resolution Imaging Spectroradiometer (MODIS) surface reflectance data and other information, as part of the Global LAnd Surface Satellite (GLASS) product suite. In addition, it was aggregated to multiple coarser resolutions (up to 0.25° and monthly). Three existing global FAPAR products (MODIS Collection 6; GLASS V5; and PRoject for On-Board Autonomy–Vegetation, PROBA-V, V1) were used to generate the time-series training samples, which were used to develop a bidirectional long short-term memory (Bi-LSTM) model. Direct validation using high-resolution FAPAR maps from the Validation of Land European Remote sensing Instrument (VALERI) and ImagineS networks revealed that the GLASS V6 FAPAR product has a higher accuracy than PROBA-V, MODIS, and GLASS V5, with an R2 value of 0.80 and root-mean-square errors (RMSEs) of 0.10–0.11 at the 250 m, 500 m, and 3 km scales, and a higher percentage (72 %) of retrievals for meeting the accuracy requirement of 0.1. Global spatial evaluation and temporal comparison at the AmeriFlux and National Ecological Observatory Network (NEON) sites revealed that the GLASS V6 FAPAR has a greater spatiotemporal continuity and reflects the variations in the vegetation better than the GLASS V5 FAPAR.
The higher quality of the GLASS V6 FAPAR is attributed to the ability of the Bi-LSTM model, which involves high-quality training samples and combines the strengths of the existing FAPAR products, as well as the temporal and spectral information from the MODIS surface reflectance data and other information. The 250 m 8 d GLASS V6 FAPAR product for 2020 is freely available at https://doi.org/10.5281/zenodo.6405564 and https://doi.org/10.5281/zenodo.6430925 (Ma, 2022a, b) as well as at the University of Maryland for 2000–2021 (http://glass.umd.edu/FAPAR/MODIS/250m, last access 1 November 2022).
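The Bi-LSTM's advantage in the abstract comes from reading the reflectance time series in both directions, so a cloudy gap can borrow context from before and after it. As a loose stdlib analogy (a toy smoother, emphatically not the paper's Bi-LSTM; all names and the `alpha` parameter are ours):

```python
def bidirectional_fill(series, alpha=0.5):
    """Toy bidirectional smoother: fills gaps (None) in a time series using
    recurrent state carried from both directions, the same intuition as a
    Bi-LSTM reading the record forward and backward."""
    def one_pass(xs):
        state, out = None, []
        for x in xs:
            if x is not None:
                state = x if state is None else alpha * x + (1 - alpha) * state
            out.append(state)
        return out

    fwd = one_pass(series)                 # forward context
    bwd = one_pass(series[::-1])[::-1]     # backward context
    merged = []
    for f, b in zip(fwd, bwd):
        vals = [v for v in (f, b) if v is not None]
        merged.append(sum(vals) / len(vals) if vals else None)
    return merged
```

A forward-only pass could only extrapolate a gap from the past; merging the backward pass pulls the estimate toward the next valid observation as well, which is the structural benefit the bidirectional model exploits.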
APA, Harvard, Vancouver, ISO, and other styles
43

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Full text
Abstract:
To date, discrete event stochastic simulations of large scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it has been widely accepted that spatial factor plays a critical role in the dynamics of most biological reaction systems. The NSM (the Next Sub-Volume Method), a spatial variation of the Gillespie’s stochastic simulation algorithm (SSA), has been proposed for spatially stochastic simulation of those systems. While being able to explore high degree of parallelism in systems, NSM is inherently sequential, which still suffers from the problem of low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) TW (time warp) simulator to enable the fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (the Abstract NSM), a parallel variation of the NSM. The simulation results of classical Lotka-Volterra biological reaction system show that our time warp simulator obtains remarkable parallel speed-up against sequential execution of the NSM.
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Hanhua, Jie Yuan, Hai Jin, Yonghui Wang, Sijie Wu, and Zhihao Jiang. "RGraph: Asynchronous graph processing based on asymmetry of remote direct memory access." Software: Practice and Experience, April 26, 2021. http://dx.doi.org/10.1002/spe.2979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Argyropoulos, Georgios PD, Clare Loane, Adriana Roca-Fernandez, Carmen Lage-Martinez, Oana Gurau, Sarosh R. Irani, and Christopher R. Butler. "Network-wide abnormalities explain memory variability in hippocampal amnesia." eLife 8 (July 8, 2019). http://dx.doi.org/10.7554/elife.46156.

Full text
Abstract:
Patients with hippocampal amnesia play a central role in memory neuroscience, but the neural underpinnings of amnesia are hotly debated. We hypothesized that focal hippocampal damage is associated with changes across the extended hippocampal system and that these, rather than hippocampal atrophy per se, would explain variability in memory between patients. We assessed this hypothesis in a uniquely large cohort of patients (n = 38) after autoimmune limbic encephalitis, a syndrome associated with focal structural hippocampal pathology. These patients showed impaired recall, recognition and maintenance of new information, and remote autobiographical amnesia. Besides hippocampal atrophy, we observed correlated reductions in thalamic and entorhinal cortical volume, in resting-state inter-hippocampal connectivity, and in posteromedial cortical activity. Associations of hippocampal volume with recall, recognition, and remote memory were fully mediated by these wider network abnormalities and were direct only for forgetting. Network abnormalities may explain the variability across studies of amnesia and speak to debates in memory neuroscience.
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Yuefeng, Anusha Mohan, S. Lauren McLeod, Alison M. Luckey, John Hart, and Sven Vanneste. "Polarity-specific high-definition transcranial direct current stimulation of the anterior and posterior default mode network improves remote memory retrieval." Brain Stimulation, June 2021. http://dx.doi.org/10.1016/j.brs.2021.06.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Rabiller, Gratianne, Yasuo Nishijima, Jiwei He, Xavier Leinekugel, Bruno Bontempi, and Jialing Liu. "Abstract WP279: Disruption of Cortico-hippocampal Functional Connectivity Underlies the Memory Impairment Following Experimental Stroke." Stroke 47, suppl_1 (February 2016). http://dx.doi.org/10.1161/str.47.suppl_1.wp279.

Full text
Abstract:
Introduction: The cognitive consequences of cerebrovascular occlusive diseases, and the underlying mechanisms leading to cognitive impairment, are still unclear. Previously we have shown that distal middle cerebral artery occlusion (dMCAO) led to hippocampal hypofunction and spatial memory impairment in the absence of direct hippocampal injury. Hypothesis: In light of the importance of interactions between the cortex and hippocampus during memory processing, we hypothesize that cortical stroke disrupts afferent excitatory input from the lesioned cortical area to other remote brain regions such as the hippocampus, and that this disruption can be mimicked by pharmacological inactivation of the cortex. Methods: Adult male rats were subjected to either dMCAO or cortical inactivation with the AMPA receptor antagonist CNQX, compared to controls given sham operation or injection of artificial CSF, respectively. Hippocampal function was assessed by the social transmission of food preference (STFP) test, activity mapping by Fos imaging, and in vivo electrophysiology by multi-channel extracellular recording. Results: dMCAO and cortical inactivation both induced impaired memory performance as measured in the STFP task and reduced hippocampal activation. The memory impairment was not attributable to altered olfaction, since olfactory deficits were not detected in either experimental group. An increase in bursts of sharp-wave-associated ripples (SPW-Rs) was detected during reperfusion after stroke and after CNQX injection. Further, cortical inactivation also induced a shift of the theta phase, suggesting an alteration in the dynamics of hippocampal-cortical interactions. Conclusions: Given the crucial role of hippocampal theta and SPW-Rs in memory function, our results suggest that cortical stroke leads to memory impairment by disrupting the functional connectivity between the cortex and hippocampus, albeit without causing direct injury to the latter.
APA, Harvard, Vancouver, ISO, and other styles
48

Leong, Kin-Wai, Zhilong Li, and Yunqu Leon Liu. "Reliable multicast using remote direct memory access (RDMA) over a passive optical cross-connect fabric enhanced with wavelength division multiplexing (WDM)." APSIPA Transactions on Signal and Information Processing 8 (2019). http://dx.doi.org/10.1017/atsip.2019.17.

Full text
Abstract:
It has been well established that reliable multicast enables consistency protocols, including Byzantine Fault Tolerant protocols, for distributed systems. However, no transport-layer reliable multicast is used today due to limitations of existing switch fabrics and transport-layer protocols. In this paper, we introduce a layer-4 (L4) transport based on remote direct memory access (RDMA) datagrams to achieve reliable multicast over a shared optical medium. By connecting a cluster of networking nodes using a passive optical cross-connect fabric enhanced with wavelength division multiplexing, all messages are broadcast to all nodes. This mechanism enables consistency in a distributed system to be maintained at a low latency cost. By further utilizing RDMA datagrams as the L4 protocol, we have achieved a message loss ratio low enough (better than one in 68 billion) to make a simple Negative Acknowledgment (NACK)-based L4 multicast practical to deploy. To our knowledge, this is the first multicast architecture to demonstrate such a low message loss ratio. Furthermore, with this reliable multicast transport, end-to-end latencies of eight microseconds or less (<8 us) have been routinely achieved using an enhanced software RDMA implementation on a variety of commodity 10G Ethernet network adapters.
APA, Harvard, Vancouver, ISO, and other styles
49

Sawant, Bhushan. "Latency and Throughput Optimization In Modern Networks." International Journal of Advanced Research in Science, Communication and Technology, May 30, 2021, 585–92. http://dx.doi.org/10.48175/ijarsct-1298.

Full text
Abstract:
This paper surveys latency and throughput optimization in modern networks. Modern applications are highly sensitive to communication delay and throughput, and the paper reviews the major approaches to reducing latency and increasing throughput across different network settings: wired networks, wireless networks, application-layer transport control, Remote Direct Memory Access, and machine-learning-based transport control. On one hand, every user wants to send and receive data as quickly as possible; on the other hand, the network infrastructure that connects users has limited capacity that is shared among them. Latency and throughput optimization is therefore essential in modern networking.
APA, Harvard, Vancouver, ISO, and other styles
50

Hinds, Bruce J., Takayuki Yamanaka, and Shunri Oda. "Charge Storage Mechanism in Nano-Crystalline Si Based Single-Electron Memories." MRS Proceedings 638 (2000). http://dx.doi.org/10.1557/proc-638-f2.2.1.

Full text
Abstract:
A memory device sensitive to a single electron stored in a nanocrystalline Si dot has been synthesized, allowing the study of charge-retention lifetime. A 50 nm by 20 nm transistor channel is fabricated by e-beam lithography followed by reactive-ion etching of a thin (20 nm) silicon-on-insulator (SOI) channel. The small area of the narrow channel allows isolation of a single nanocrystalline Si dot and elimination of channel percolation paths around the screening charge. Remote plasma-enhanced CVD is used to form 6±2 nm diameter nc-Si dots in the gas phase from a pulsed SiH4 source. An electron stored in a dot results in an observed discrete threshold shift of 90 mV. Analysis of lifetime as a function of applied potential and temperature shows the dot to be an acceptor site with nearly Poisson lifetime distributions. An observed 1/T2 dependence of lifetime is consistent with a direct tunneling process; thus interface states are not the dominant mechanism for electron storage in this device structure. Median lifetimes are modeled by a direct tunneling process, which is influenced by gate bias and dot size.
APA, Harvard, Vancouver, ISO, and other styles