Journal articles on the topic 'Scalable memory bank'

To see the other types of publications on this topic, follow the link: Scalable memory bank.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 20 journal articles for your research on the topic 'Scalable memory bank.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Torres, Igor Cavalcante, Daniel M. Farias, Andre L. L. Aquino, and Chigueru Tiba. "Voltage Regulation For Residential Prosumers Using a Set of Scalable Power Storage." Energies 14, no. 11 (June 4, 2021): 3288. http://dx.doi.org/10.3390/en14113288.

Abstract:
Among the electrical problems arising from solar irradiation variability, power quality and guaranteed energy dispatch stand out. Rapid advances in battery technology have fostered their use alongside photovoltaic system (PVS) installations. This work proposes voltage regulation for residential prosumers using a set of scalable power batteries in passive mode, operating as a consumer device. The mitigation strategy makes decisions acting directly on the demand for a storage bank, and the power of the storage element is selected according to the results of the power flow calculation step combined with a prediction of solar radiation computed by a Long Short-Term Memory (LSTM) recurrent neural network. The solar radiation predictions are used to estimate the state of the power grid, solving the power flow and exposing the 1-min electrical voltage values that enable the entry of the storage device. In this stage, the OpenDSS (Open Distribution System Simulator) software is used to perform the complete modeling of the power grid where the study is developed, as well as to simulate the effect of the overvoltage mitigation system. On the clear-sky day, 9111 Wh/day of electricity was stored to mitigate overvoltages at the supply point; compared to other days, the clear-sky day needed to store less electricity. On days of high variability, the energy stored to regulate overvoltages was 84% greater than on a clear day. In order to maintain a constant state of charge (SoC), the capacity of the battery bank must be increased to meet the condition of maximum accumulated energy. Regarding the total loading of the storage system, the days of low variability consumed approximately 12% of the available battery capacity, considering an SoC of 70% of the capacity of each power level.
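
The control loop this abstract describes can be made concrete with a small sketch: a forecast of irradiance feeds a (here heavily simplified) power-flow estimate, and the smallest storage power level that keeps the voltage under the limit is engaged. All thresholds, the linearized voltage model, and the power levels below are illustrative stand-ins, not the paper's values; the authors use OpenDSS for the power flow and an LSTM for the forecast.

```python
# Hypothetical sketch of the mitigation loop: forecast irradiance -> PV power
# -> toy power-flow voltage estimate -> pick a scalable storage power level.
import numpy as np

V_NOMINAL = 1.0          # per-unit supply voltage
V_LIMIT = 1.05           # per-unit overvoltage limit
POWER_LEVELS_W = [500, 1000, 2000, 4000]   # illustrative storage bank levels

def pv_power(irradiance_w_m2, area_m2=20.0, efficiency=0.18):
    """Very rough PV output from the forecast irradiance."""
    return irradiance_w_m2 * area_m2 * efficiency

def estimated_voltage(injected_w, feeder_resistance_pu=2e-5):
    """Toy stand-in for the OpenDSS power-flow step (linearized voltage rise)."""
    return V_NOMINAL + feeder_resistance_pu * injected_w

def select_storage_level(forecast_irradiance, load_w):
    """Pick the smallest storage level that brings the voltage under the limit."""
    surplus = max(pv_power(forecast_irradiance) - load_w, 0.0)
    for level in [0] + POWER_LEVELS_W:
        if estimated_voltage(surplus - level) <= V_LIMIT:
            return level
    return POWER_LEVELS_W[-1]

# One-minute decisions over a toy clear-sky morning forecast.
for g in [200, 600, 1000]:                      # W/m^2 forecast samples
    print(g, "W/m^2 ->", select_storage_level(g, load_w=800), "W stored")
```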
2

Wan, Hui, Liang Chen, and Minghua Deng. "scNAME: neighborhood contrastive clustering with ancillary mask estimation for scRNA-seq data." Bioinformatics 38, no. 6 (January 6, 2022): 1575–83. http://dx.doi.org/10.1093/bioinformatics/btac011.

Abstract:
Motivation: The rapid development of single-cell RNA sequencing (scRNA-seq) makes it possible to study the heterogeneity of individual cell characteristics. Cell clustering is a vital procedure in scRNA-seq analysis, providing insight into complex biological phenomena. However, the noisy, high-dimensional and large-scale nature of scRNA-seq data introduces challenges in clustering analysis. To date, many deep learning-based methods have emerged to learn underlying feature representations while clustering. However, these methods are inefficient at rare cell type identification and barely able to fully utilize gene dependencies or cell similarity integrally. As a result, they cannot detect the clear cell type structure required for clustering accuracy as well as downstream analysis.
Results: Here, we propose a novel scRNA-seq clustering algorithm called scNAME, which incorporates a mask estimation task for gene pertinence mining and a neighborhood contrastive learning framework for exploiting intrinsic cell structure. The pattern learned through mask estimation helps reveal the uncorrupted data structure and denoise the original single-cell data. In addition, the randomly created augmented data introduced in contrastive learning not only helps improve the robustness of clustering, but also increases the sample size in each cluster for better data capacity. Beyond this, we also introduce a neighborhood contrastive paradigm with an offline memory bank, global in scope, which can encourage discriminative feature representation and achieve intra-cluster compactness, yet inter-cluster separation. The combination of the mask estimation task, neighborhood contrastive learning and the global memory bank designed in scNAME is conducive to rare cell type detection. The experimental results of both simulations and real data confirm that our method is accurate, robust and scalable. We also carry out biological analyses, including marker gene identification, gene ontology and pathway enrichment analysis, to validate the biological significance of our method. To the best of our knowledge, we are among the first to introduce a gene relationship exploration strategy, as well as a global cellular similarity repository, in the single-cell field.
Availability and implementation: An implementation of scNAME is available from https://github.com/aster-ww/scNAME.
Supplementary information: Supplementary data are available at Bioinformatics online.
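
For readers unfamiliar with the memory-bank idea, the following minimal numpy sketch illustrates a neighborhood contrastive objective of the kind the abstract describes: each embedding is pulled toward its K nearest neighbors in a global, offline memory bank and pushed away from the rest, and the bank is refreshed with a momentum update. Names, sizes, and the update rule are illustrative assumptions; the actual scNAME model is in the repository linked above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K, TAU = 256, 32, 5, 0.5

bank = rng.normal(size=(N, D))                 # offline global memory bank
bank /= np.linalg.norm(bank, axis=1, keepdims=True)

def neighborhood_contrastive_loss(z, bank, k=K, tau=TAU):
    """z: (B, D) L2-normalized embeddings of the current mini-batch."""
    sim = z @ bank.T / tau                      # (B, N) scaled similarities
    pos_idx = np.argsort(-sim, axis=1)[:, :k]   # positives: k nearest in bank
    log_den = np.log(np.exp(sim).sum(axis=1, keepdims=True))  # (B, 1)
    log_num = np.take_along_axis(sim, pos_idx, axis=1)        # (B, k)
    return float(np.mean(log_den - log_num))    # -log p(neighbor | anchor)

batch = bank[:8] + 0.05 * rng.normal(size=(8, D))
batch /= np.linalg.norm(batch, axis=1, keepdims=True)
print("loss:", neighborhood_contrastive_loss(batch, bank))

# Offline bank update: a momentum average keeps the bank global, slow-moving.
m = 0.9
bank[:8] = m * bank[:8] + (1 - m) * batch
bank[:8] /= np.linalg.norm(bank[:8], axis=1, keepdims=True)
```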
3

Tzannou, Ifigeneia, Kathryn S. Leung, Caridad Martinez, Swati Naik, Stephen Gottschalk, Adrian P. Gee, Bambi Grilley, et al. "Safety and Preliminary Efficacy of "Ready to Administer" Cytomegalovirus (CMV)-Specific T Cells for the Treatment of Patients with Refractory CMV Infection." Blood 128, no. 22 (December 2, 2016): 388. http://dx.doi.org/10.1182/blood.v128.22.388.388.

Abstract:
Despite advances in antiviral drugs, cytomegalovirus (CMV) infections remain a significant cause of morbidity and mortality in immunocompromised individuals. We have recently demonstrated in hematopoietic stem cell transplant (HSCT) recipients that adoptively transferred virus-specific T cells, generated from healthy third-party donors and administered as a "ready to administer" product, can be curative, even in patients with drug-refractory CMV infections. However, broader implementation has been hindered by the postulated need for extensive panels of T cell lines representing a diverse HLA profile, as well as the complexities of large-scale manufacturing for widespread clinical application. To address these potential issues, we have developed a decision tool that identified a short list of donors who provide HLA coverage for >90% of the stem cell transplant population. Furthermore, to generate banks of CMV-specific T cells from these donors, we have created a simple, robust, and linearly scalable manufacturing process. To determine whether these advances would enable the widespread application of "ready to administer" T cells, we generated CMV cell banks (Viralym-C™) from 9 healthy donors selected by our decision tool, and initiated a fixed-dose (2×10⁷ cells/m²) Phase I clinical trial for the treatment of drug-refractory CMV infections in pediatric and adult HSCT recipients. To generate the Viralym-C™ banks, we stimulated donor peripheral blood mononuclear cells (PBMCs) with overlapping peptide libraries spanning the immunodominant CMV antigens pp65 and IE1. Cells were subsequently expanded in a G-Rex device, resulting in a mean fold expansion of 103±12. The lines were polyclonal, comprising both CD4+ (21.3±6.7%) and CD8+ (74.8±6.9%) T cells, and expressed central memory (CD45RO+/CD62L+, 58.5±4.2%) and effector memory (CD45RO+/CD62L−, 35.3±12.2%) markers. Furthermore, the lines generated were specific for the target antigens (IE1: 419±100; pp65: 1070±31 SFC/2×10⁵, n=9). To date, we have screened 12 patients for study participation, and from our bank of just 9 lines we have successfully identified a suitable line for all patients within 24 hours. Of these, 6 patients have been infused; 5 received a single infusion and 1 patient required 2 infusions for sustained benefit. There were no immediate infusion-related toxicities, and despite the HLA disparity between the Viralym-C™ lines and the patients infused, there were no cases of de novo or recurrent graft-versus-host disease (GvHD). One patient developed a transient fever a few hours post-infusion, which spontaneously resolved. Based on viral load, measured by quantitative PCR, or symptom resolution (in patients with disease), Viralym-C™ cells controlled active infections in all 5 evaluable patients; 4 patients had complete responses, and 1 patient had a partial response within 4 weeks of cell infusion. One patient with CMV retinitis had complete resolution of symptoms following Viralym-C™ infusion. In conclusion, our results demonstrate the feasibility, preliminary safety and efficacy of "ready to administer" Viralym-C™ cells generated from a small panel of healthy, eligible, CMV-seropositive donors identified by our decision support tool. These data suggest that cost-effective, broadly applicable anti-viral T cell therapy may be feasible for patients following HSCT and potentially other conditions.
Disclosures: Tzannou: ViraCyte LLC: Consultancy. Leen: ViraCyte LLC: Equity Ownership, Patents & Royalties. Kakarla: ViraCyte LLC: Employment.
4

Murfi, Hendri. "A scalable eigenspace-based fuzzy c-means for topic detection." Data Technologies and Applications 55, no. 4 (March 23, 2021): 527–41. http://dx.doi.org/10.1108/dta-11-2020-0262.

Abstract:
Purpose: The aim of this research is to develop an eigenspace-based fuzzy c-means method for scalable topic detection.
Design/methodology/approach: The eigenspace-based fuzzy c-means (EFCM) combines representation learning and clustering. The textual data are transformed into a lower-dimensional eigenspace using truncated singular value decomposition. Fuzzy c-means is performed on the eigenspace to identify the centroids of each cluster. The topics are obtained by transforming the centroids back into the nonnegative subspace of the original space. In this paper, we extend the EFCM method for scalability using two approaches, i.e. single-pass and online. We call the developed topic detection methods oEFCM and spEFCM.
Findings: Our simulation shows that both oEFCM and spEFCM provide faster running times than EFCM for data sets that do not fit in memory, at the cost of a decrease in the average coherence score. For data sets that both fit and do not fit in memory, oEFCM provides a trade-off between running time and coherence score that is better than spEFCM's.
Originality/value: This research produces a scalable topic detection method. Besides this scalability, the developed method also provides a faster running time for data sets that fit in memory.
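
The pipeline is simple enough to sketch end to end. The toy code below, with illustrative sizes and a plain iterative fuzzy c-means loop, follows the three steps the abstract names: truncated SVD into an eigenspace, fuzzy c-means there, and a transform of the centroids back into the nonnegative subspace. The single-pass and online variants (spEFCM, oEFCM) would wrap step 2 in a chunked loop over data that does not fit in memory.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(200, 1000)))       # toy document-term matrix
RANK, C, M, ITERS = 20, 5, 2.0, 50             # eigenspace dim, clusters, fuzzifier

# 1) Truncated SVD: X ~ U_k S_k Vt_k; rows of U_k S_k are eigenspace coords.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, :RANK] * S[:RANK]

# 2) Fuzzy c-means in the eigenspace.
centroids = Z[rng.choice(len(Z), C, replace=False)]
for _ in range(ITERS):
    d = np.linalg.norm(Z[:, None, :] - centroids[None], axis=2) + 1e-12
    u = 1.0 / (d ** (2 / (M - 1)))
    u /= u.sum(axis=1, keepdims=True)          # membership matrix (n, C)
    w = u ** M
    centroids = (w.T @ Z) / w.sum(axis=0)[:, None]

# 3) Topics: transform centroids back, keep the nonnegative part.
topics = np.clip(centroids @ Vt[:RANK], 0.0, None)
print("top term ids per topic:", np.argsort(-topics, axis=1)[:, :5])
```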
5

RIDDOCH, DAVID, STEVE POPE, DEREK ROBERTS, GLENFORD MAPP, DAVID CLARKE, DAVID INGRAM, KIERAN MANSLEY, and ANDY HOPPER. "TRIPWIRE: A SYNCHRONISATION PRIMITIVE FOR VIRTUAL MEMORY MAPPED COMMUNICATION." Journal of Interconnection Networks 02, no. 03 (September 2001): 345–64. http://dx.doi.org/10.1142/s0219265901000439.

Abstract:
Existing user-level network interfaces deliver high bandwidth, low latency performance to applications, but are typically unable to support diverse styles of communication and are unsuitable for use in multiprogrammed environments. Often this is because the network abstraction is presented at too high a level, and support for synchronisation is inflexible. In this paper we present a new primitive for in-band synchronisation: the Tripwire. Tripwires provide a flexible, efficient and scalable means for synchronisation that is orthogonal to data transfer. We describe the implementation of a non-coherent distributed shared memory network interface, with Tripwires for synchronisation. This interface provides a low-level communications model with gigabit class bandwidth and very low overhead and latency. We show how it supports a variety of communication styles, including remote procedure call, message passing and streaming.
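
A conceptual sketch helps fix the idea of a Tripwire as synchronization orthogonal to data transfer: an event fires when a watched location in a memory-mapped region is written, regardless of how the data got there. The Python below does the matching in software with threads purely for illustration; in the paper the match is performed by the network interface, and all names here are made up.

```python
import threading

class Tripwire:
    def __init__(self, watch_offset):
        self.watch_offset = watch_offset
        self.fired = threading.Event()

class Region:
    """A toy virtual-memory-mapped region with tripwire matching on writes."""
    def __init__(self, size):
        self.mem = bytearray(size)
        self.tripwires = []

    def write(self, offset, data):
        self.mem[offset:offset + len(data)] = data
        for tw in self.tripwires:               # match against the written range
            if offset <= tw.watch_offset < offset + len(data):
                tw.fired.set()

region = Region(4096)
tw = Tripwire(watch_offset=128)                 # e.g. an RPC reply slot
region.tripwires.append(tw)

threading.Thread(target=region.write, args=(128, b"reply")).start()
tw.fired.wait()                                 # block until the write lands
print("tripwire fired; data:", bytes(region.mem[128:133]))
```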
6

ALMILADI, ABDURAZZAG, and MOHAMAD IBRAHIM. "HIGH PERFORMANCE SCALABLE RADIX-2^n GF(2^m) SERIAL–SERIAL MULTIPLIERS." Journal of Circuits, Systems and Computers 18, no. 01 (February 2009): 11–30. http://dx.doi.org/10.1142/s0218126609004892.

Abstract:
In this paper, a new architecture for radix-2^n serial–serial multiplication/reduction for the finite field GF(2^m) is presented. The input operands are entered serially, one digit at a time, and the output result is computed serially, one digit at a time. The reduction polynomial is also fed serially to the structure, so that changing the reduction polynomial does not require rewriting or rewiring the structure. The structure uses serial transfer, which reduces the bus width needed to move data back and forth between the memory and the multiplication unit. The structure possesses the features of regularity, modularity and scalability, which are design requirements for efficient utilization of FPGA resources. Also, a systolic, scalable, area-efficient design which provides a 50% reduction in hardware without degrading speed performance is proposed. A radix-4 version of the proposed architecture has been designed, simulated and synthesized using Xilinx ISE 10.1, targeting a Xilinx Virtex-5 FPGA.
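
To make the serial dataflow concrete, here is a small software model of the radix-2 (one bit per cycle) case: the multiplier is consumed one digit at a time and reduction by the field polynomial is interleaved with accumulation, so the polynomial can be swapped without restructuring anything. The field and test values (the AES field GF(2^8)) are illustrative; the paper's hardware processes radix-2^n digits.

```python
def gf2m_mul(a: int, b: int, poly: int, m: int) -> int:
    """Multiply a*b in GF(2^m); 'poly' encodes x^m + ... (e.g. 0x11B)."""
    acc = 0
    for i in reversed(range(m)):     # one multiplier bit ("digit") per cycle
        acc <<= 1                    # shift the running partial product
        if acc & (1 << m):
            acc ^= poly              # serial reduction step
        if (b >> i) & 1:
            acc ^= a                 # add (XOR) the multiplicand
    return acc

# x * (x + 1) = x^2 + x in GF(2^8):
assert gf2m_mul(0x02, 0x03, 0x11B, 8) == 0x06
print(hex(gf2m_mul(0x53, 0xCA, 0x11B, 8)))   # 0x53 and 0xCA are inverses -> 0x1
```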
7

Imperatore, Pasquale, and Eugenio Sansosti. "Multithreading Based Parallel Processing for Image Geometric Coregistration in SAR Interferometry." Remote Sensing 13, no. 10 (May 18, 2021): 1963. http://dx.doi.org/10.3390/rs13101963.

Abstract:
Within the framework of multi-temporal Synthetic Aperture Radar (SAR) interferometric processing, image coregistration is a fundamental operation that might be extremely time-consuming. This paper explores the possibility of addressing fast and accurate SAR image geometric coregistration, with sub-pixel accuracy and in the presence of a complex 3-D object scene, by exploiting the parallelism offered by shared-memory architectures. An efficient and scalable processor is proposed by designing a parallel algorithm incorporating thread-level parallelism for solving the inherent computationally intensive problem. The adopted functional scheme is first mathematically framed and then investigated in detail in terms of its computational structures. Subsequently, a parallel version of the algorithm is designed, according to a fork-join model, by suitably taking into account the granularity of the decomposition, load-balancing, and different scheduling strategies. The developed parallel algorithm implements parallelism at the thread-level by using OpenMP (Open Multi-Processing) and it is specifically targeted at shared-memory multiprocessors. The parallel performance of the implemented multithreading-based SAR image coregistration prototype processor is experimentally investigated and quantitatively assessed by processing high-resolution X-band COSMO-SkyMed SAR data and using two different multicore architectures. The effectiveness of the developed multithreaded prototype solution in fully benefitting from the computing power offered by multicore processors has successfully been demonstrated via a suitable experimental performance analysis conducted in terms of parallel speedup and efficiency. The demonstrated scalable performance and portability of the developed parallel processor confirm its potential for operational use in the interferometric SAR data processing at large scales.
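
The fork-join structure the abstract describes is easy to sketch independent of the SAR specifics: split the image into tiles (granularity), hand tiles to a worker pool (scheduling and load balancing), and join results into the output grid. The per-tile "warp" below is a placeholder shift, and Python threads stand in for OpenMP, so this is a structural sketch only.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def coregister_tile(args):
    tile, row0 = args
    # placeholder per-tile work: a constant shift standing in for the
    # DEM/orbit-driven geometric warp and sub-pixel interpolation
    return row0, np.roll(tile, shift=1, axis=1)

slave = np.random.rand(1024, 1024).astype(np.float32)
tiles = [(slave[r:r + 128], r) for r in range(0, 1024, 128)]   # fork
out = np.empty_like(slave)

with ThreadPoolExecutor(max_workers=4) as pool:                # schedule
    for row0, warped in pool.map(coregister_tile, tiles):      # join
        out[row0:row0 + warped.shape[0]] = warped
print("coregistered:", out.shape)
```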
8

ALMILADI, ABDURAZZAG SULAIMAN. "HIGH PERFORMANCE SCALABLE MIXED-RADIX-2^n SERIAL-SERIAL MULTIPLIERS FOR GF(2^m)." Journal of Circuits, Systems and Computers 19, no. 05 (August 2010): 1089–107. http://dx.doi.org/10.1142/s0218126610006621.

Abstract:
In this paper, two new high-performance bidirectional mixed radix-2^n serial-serial multipliers for the finite field GF(2^m) are presented. The input operands are entered serially, one digit at a time for the first operand and two digits at a time for the second. The output result is computed serially, one digit at a time. The reduction polynomial is also fed serially to the structure in the same manner, so that changing the reduction polynomial does not require rewriting or rewiring the structure. The structures use serial transfer, which reduces the bus width needed to move data back and forth between the memory and the multiplication unit. The structures possess the features of regularity, modularity and scalability, which are design requirements for efficient utilization of FPGA resources. The new twin-pipe design improves the area-time performance by ~37% compared with the best existing radix-2^n serial-serial multipliers for the finite field GF(2^m); furthermore, it is the first twin-pipe bidirectional radix-2^n serial-serial multiplier for GF(2^m) reported in the literature. The twin-pipe multiplier can perform two successive K-digit multiplications in 2K + 6 cycles without truncating the results; as a consequence, new data can be fed into the multiplier every K + 3 cycles. A radix-4 version of the proposed architecture has been designed, simulated and synthesized using Xilinx ISE 10.1, targeting a Xilinx Virtex-5 FPGA.
9

KIM, KEONWOOK, and ALAN D. GEORGE. "PARALLEL SUBSPACE PROJECTION BEAMFORMING FOR AUTONOMOUS, PASSIVE SONAR SIGNAL PROCESSING." Journal of Computational Acoustics 11, no. 01 (March 2003): 55–74. http://dx.doi.org/10.1142/s0218396x0300181x.

Abstract:
Adaptive techniques can be applied to improve performance of a beamformer in a cluttered environment. The sequential implementation of an adaptive beamformer, for many sensors and over a wide band of frequencies, presents a serious computational challenge. By coupling each transducer node with a microprocessor, in-situ parallel processing applied to an adaptive beamformer on a distributed system can glean advantages in execution speed, fault tolerance, scalability, and cost. In this paper, parallel algorithms for Subspace Projection Beamforming (SPB), using QR decomposition on distributed systems, are introduced for in-situ signal processing. Performance results from parallel and sequential algorithms are presented using a distributed system testbed comprised of a cluster of computers connected by a network. The execution times, parallel efficiencies, and memory requirements of each parallel algorithm are presented and analyzed. The results of these analyses demonstrate that parallel in-situ processing holds the potential to meet the needs of future advanced beamforming algorithms in a scalable fashion.
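
A compact numerical sketch of the subspace projection idea: estimate the dominant (interference) subspace from the snapshot matrix, project it out, and beamform toward the target in the cleaned subspace. For brevity this sketch takes the subspace from an SVD rather than the distributed QR decomposition the paper parallelizes, and the array geometry and signal levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 16, 200                               # sensors, snapshots
d = np.arange(N)
steer = lambda th: np.exp(1j * np.pi * d * np.sin(th)) / np.sqrt(N)

# snapshots: strong interferer at 0.5 rad + weak target at -0.2 rad + noise
X = (10 * np.outer(steer(0.5), rng.normal(size=T))
     + 0.5 * np.outer(steer(-0.2), rng.normal(size=T))
     + 0.05 * (rng.normal(size=(N, T)) + 1j * rng.normal(size=(N, T))))

U, s, _ = np.linalg.svd(X, full_matrices=False)
E = U[:, :1]                                  # dominant = interference subspace
P = np.eye(N) - E @ E.conj().T                # projector onto its complement

for name, w in [("conventional", steer(-0.2)), ("projected", P @ steer(-0.2))]:
    y = w.conj() @ X                          # beamformer output time series
    print(name, "output power:", round(float(np.mean(np.abs(y) ** 2)), 4))
```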
10

Ma, Yan, Jie Song, and Zhixin Zhang. "In-Memory Distributed Mosaicking for Large-Scale Remote Sensing Applications with Geo-Gridded Data Staging on Alluxio." Remote Sensing 14, no. 23 (November 25, 2022): 5987. http://dx.doi.org/10.3390/rs14235987.

Abstract:
The unprecedented availability of petascale analysis-ready earth observation data has given rise to a remarkable surge in demand for regional to global environmental studies, which exploit vast amounts of data for temporal-spatial analysis at a much larger scale than ever. Imagery mosaicking, which is critical for forming "One Map" with a continuous view for large-scale climate research, has drawn significant attention. However, despite employing distributed data processing engines such as Spark, large-scale data mosaicking still suffers from the staggering number of remote sensing images, which inevitably leads to discouraging performance. The main problem of traditional parallel mosaicking algorithms is inherent in the huge computation demand and the heavy data I/O burden that result from intensively shifting tremendous amounts of RS data back and forth between limited local memory and bulk external storage throughout the multiple processing stages. To address these issues, we propose in-memory Spark-enabled distributed data mosaicking at large scale, with geo-gridded data staging accelerated by Alluxio. It organizes enormous "messy" remote sensing datasets into geo-encoded grid groups and indexes them with multi-dimensional space-filling-curve geo-encoding assisted by GeoTrellis. All the buckets of geo-gridded remote sensing data groups can be loaded directly from Alluxio with data prefetching and expressed as RDDs implemented concurrently as grid mosaicking tasks on top of the Spark-enabled cluster. It is worth noticing that an in-memory data orchestration is offered to facilitate in-memory big data staging among the multiple mosaicking processing stages, eliminating tremendous data transfer to a great extent while maintaining better data locality. As a result, benefiting from parallel processing with distributed data prefetching and in-memory data staging, this is a much more effective approach to large-scale data mosaicking in the context of big data. Experimental results demonstrate that our approach is much more efficient and scalable than traditional parallel implementations.
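
The geo-gridded staging idea can be sketched without any of the heavy machinery: assign each scene a grid cell, give the cell a space-filling-curve (Morton/Z-order) key so spatially close cells get numerically close keys, and bucket scenes by key so each bucket becomes one mosaicking task. Plain dictionaries below stand in for GeoTrellis keying, Alluxio prefetching, and Spark RDDs; scene IDs and coordinates are made up.

```python
def interleave16(x: int, y: int) -> int:
    """Morton-encode two 16-bit cell indices into one Z-order key."""
    key = 0
    for i in range(16):
        key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return key

def grid_cell(lon, lat, cell_deg=1.0):
    """Map a lon/lat to integer grid-cell indices."""
    return int((lon + 180) / cell_deg), int((lat + 90) / cell_deg)

scenes = [("S2A_0001", 11.2, 46.1), ("S2B_0042", 11.7, 46.4),
          ("S2A_0099", -70.3, -33.5)]          # (id, lon, lat), illustrative

buckets = {}
for sid, lon, lat in scenes:
    key = interleave16(*grid_cell(lon, lat))
    buckets.setdefault(key, []).append(sid)     # one bucket -> one mosaic task

for key, ids in sorted(buckets.items()):
    print(f"grid key {key}: {ids}")
```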
11

Hu, Ning, Marcos A. Cheney, Younes Hanifehpour, Sang Woo Joo, and Bong-Ki Min. "Synthesis, Characterization, and Catalytic Performance of Sb2Se3 Nanorods." Journal of Nanomaterials 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/5385908.

Abstract:
Antimony selenide has many potential applications in thermoelectric, photovoltaic, and phase-change memory devices. A novel method is described for the rapid and scalable preparation of antimony selenide (Sb2Se3) nanorods in the presence of hydrazine hydrate and/or permanganate at 40°C. Crystalline nanorods are obtained by the addition of hydrazine hydrate to a reaction mixture of antimony acetate and/or chloride and sodium selenite in neutral and basic media, while amorphous nanoparticles are formed by the addition of KMnO4 to a reaction mixture of antimony acetate/chloride and sodium selenite. The powder X-ray diffraction pattern confirms orthorhombic-phase crystalline Sb2Se3 for the first and second reactions, with lattice parameters a = 1.120 nm, b = 1.128 nm, and c = 0.383 nm, and amorphous Sb2Se3 for the third reaction. Scanning electron microscopy (SEM), transmission electron microscopy (TEM), and high-resolution TEM (HRTEM) images show nanorod diameters on the order of 100 nm to 150 nm for the first and second reactions, and particles of about 20 nm for the third reaction. EDX and XPS suggest that the nanorods are pure Sb2Se3. UV-vis analysis indicates band gaps of 4.14 and 4.97 eV for the crystalline and amorphous Sb2Se3, respectively, corresponding to a blue shift. The photocatalytic study shows that the decolorization of Rhodamine in solution by the nanoparticles is slightly greater than that by the nanorods.
12

Yang, Airong, and Guoxin Yu. "Application of Heterogeneous Network Oriented to NoSQL Database in Optimal Postevaluation Indexes of Construction Projects." Discrete Dynamics in Nature and Society 2022 (January 13, 2022): 1–10. http://dx.doi.org/10.1155/2022/4817300.

Abstract:
With the advent of the Internet Web 2.0 era, the storage devices used to store website data are growing at an ever-increasing rate and diversifying. The focus on the structured data storage model has reduced the responsiveness of traditional relational databases to data changes. NoSQL databases are scalable, have a powerful and flexible data model, handle large amounts of data, and have growing application potential in the storage field. Heterogeneous networks are composed of third-party computers, network equipment, and systems, with different network types usually used alongside other protocols to support additional functions and applications. Research on heterogeneous networks can be traced back to the BARWAN project, started in 1995 at the University of California, Berkeley, whose leader R. H. Katz merged multiple types of nested networks for the first time to form heterogeneous network requirements for various future terminal services. Construction engineering refers to an engineering entity formed by installing the pipelines and equipment that support the construction of various houses and ancillary facilities. "House construction" refers to projects with roofs, beams, columns, walls, and foundations that form internal spaces to meet people's needs in production, living, learning, and public activities. Engineering post-evaluation indexes are statistical indexes used to evaluate and compare the quality and effects of social and economic activities through the use of equipment, such as capital turnover rate and employee labor efficiency; they both carry and express the content of corporate performance evaluation.
13

Suresh, Vignesh, Meiyu Stella Huang, Madapusi P. Srinivasan, and Sivashankar Krishnamoorthy. "High Density Metal Oxide (ZnO) Nanopatterned Platforms for Electronic Applications." MRS Proceedings 1498 (2013): 255–61. http://dx.doi.org/10.1557/opl.2013.344.

Abstract:
Fabrication methodologies with high precision and tunability for nanostructures of metals and metal oxides are widely explored for engineering devices such as solar cells, sensors, non-volatile memories (NVM), etc. In this direction, metal and metal oxide nanopatterned arrays are the state-of-the-art platforms upon which device structures are built, where the tunable, orderly arrangement of the nanostructures enhances device performance. We describe here a combination of fabrication protocols that employ block copolymer self-assembly and nanoimprint lithography (NIL) to obtain metal oxide nanopatterns with sub-100 nm spatial resolution. The protocols are easily scalable down to sub-50 nm and below. Nanopatterned arrays of ZnO created using NIL-assisted templates, through area-selective atomic layer deposition (ALD) and radio frequency (RF) sputtering, find application in NVM and photovoltaics. We employed NIL to produce nanoporous polymer templates, using Si molds derived from block copolymer lithography (BCL), for pattern transfer into ZnO. The resulting ZnO nanoarrays were highly dense (8.6×10⁹ nanofeatures per cm²), exhibiting periodic feature-to-feature spacing and width that replicated the geometric attributes of the template. Such nanopatterns find application in NVM, where a change in the density and periodicity of the arrays influences the charge storage characteristics. The above assembly and patterning protocols were employed to fabricate metal-oxide-semiconductor (MOS) capacitor devices to investigate their application in NVM, with the patterned ZnO nanoarrays used as charge storage centres. Preliminary results investigating the flash memory performance showed a good flat-band voltage hysteresis window at a relatively low operating voltage, due to the high charge trap density.
14

Smith, Ken, Peter Hanaway, Mike Jolley, Reed Gleason, and Eric Strid. "3D-TSV Test Options and Process Compatibility." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2011, DPC (January 1, 2011): 002199–225. http://dx.doi.org/10.4071/2011dpc-tha16.

Abstract:
3D-TSV stacking with non-reworkable bonding processes implies known-good die screening with high test coverage to be economical. Depending upon the stack architecture, the need to contact TSVs directly or TSV bonding pads during wafer test ranges from minimal to mandatory. An example of minimal need to contact is a low-power logic or memory chip powered and tested through a small number of test pads for scan chains or built-in self-test (BIST) signals. Examples of mandatory TSV pad contact during wafer test include power delivery pins (up to thousands), analog or RF I/Os, stacked logic functions, or more complex requirements. With appropriate design, test throughput (and therefore test cost) can dramatically benefit from contacting the many more I/Os available at the TSVs or TSV pads, just as the speed of the stack benefits from the many more I/Os when in mission mode. Probes and probing processes that touch TSVs and thin pads with minimal forces and pad marking are described herein. Feasibility of tip forces of ~1 g or less and very low pad damage is demonstrated at 40 micron array pitch. The lithographically printed probe structures are scalable to even smaller pitches, and fabrication costs scale with probe area, not with pin count. The degree of pad marking from probe tips depends on pad material choices. A non-oxidizing metal surface, such as ENIG, requires <0.1 g for good contact, thus disturbing the surface so little that marks are difficult to detect. By contrast, a Sn surface requires ~1 g for low and consistent contact resistance. Probe contacting at TSV pitches is practical with evolutions of existing probe technology, and enables test strategies which probe some or all of the TSV pads, whether on the face or back of the wafer.
15

Gong, Xiao, Kaizhen Han, Chen Sun, Zijie Zheng, Qiwen Kong, Yuye Kang, Chengkuan Wang, et al. "BEOL-Compatible InGaZnO-Based Devices for 3D Integrated Circuits." ECS Meeting Abstracts MA2022-02, no. 32 (October 9, 2022): 1186. http://dx.doi.org/10.1149/ma2022-02321186mtgabs.

Abstract:
Due to its attractive material and electrical properties, indium-gallium-zinc-oxide (IGZO) has been extensively researched for many emerging technologies, especially for three-dimensional (3D) monolithic integration and back-end-of-line (BEOL) compatible applications [1]. On the pathway toward the realization of high-performance 3D monolithic integrated circuits (ICs), a wide range of building blocks with different functionalities is required. 3D monolithic ICs also demand optimization in device performance and circuit architecture design. In this paper, we discuss our recent research developments in IGZO-based techniques at both the device and circuit levels. This includes a nanowire structure for IGZO-based transistors, as well as BEOL-compatible ferroelectric ternary content-addressable memory (TCAM) and embedded dynamic random-access memory (eDRAM) for compute-in-memory (CiM) using IGZO-based transistors. A novel digital etch technique for amorphous IGZO (α-IGZO) material, as well as the formation of α-IGZO nanowires, was realized, enabling high-performance α-IGZO nanowire field-effect transistors (NWFETs) with ultra-scaled nanowire width (W_NW) [2]. The scanning electron microscopy (SEM) images of α-IGZO nanowires before and after the digital etch clearly show the nanowire structure as well as the W_NW reduction after digital etch. The smallest α-IGZO nanowire after digital etch has a W_NW of ~20 nm. By leveraging the ultra-scaled nanowire structure, the NWFET with the smallest W_NW achieves a decent subthreshold swing of 80 mV/decade as well as a high peak extrinsic transconductance (G_m,ext) of 612 μS/μm at a drain-to-source voltage (V_DS) of 2 V (456 μS/μm at V_DS = 1 V). Compared with previous works in the literature, our IGZO NWFET achieves one of the highest peak G_m values among all IGZO-based FETs. α-IGZO ferroelectric FETs (Fe-FETs) with a metal-ferroelectric-metal-oxide-semiconductor (MFMIS) structure were further realized based on the α-IGZO transistor process modules, with a channel length (L_CH) as small as 40 nm. The cross-sectional transmission electron microscopy (TEM) image of the device shows a sharp interface. The α-IGZO Fe-FETs achieve a large memory window of 2.9 V, high endurance of 10⁸ cycles, a high conductance ratio, and small cycle-to-cycle variation. By leveraging the low-temperature-processed α-IGZO Fe-FETs with good electrical characteristics, a BEOL-compatible ferroelectric TCAM circuit with 2 Fe-FETs connected in parallel was realized [3], showing an extremely large sensing margin. In addition, such an α-IGZO Fe-FET TCAM reduces the transistor count from 16 to 2 as compared to traditional SRAM-based TCAM; smaller cell size and higher energy efficiency can also be obtained. IGZO transistors can play an important role in in-memory computing as well. An SEM image of the eDRAM CiM cell shows the utilization of IGZO transistors; the smallest device has an L_CH of 45 nm [4]. The IGZO transistor-based eDRAM CiM with a differential cell structure achieves low leakage current, low variation, low charge-loss sensitivity, and control-friendly charge-domain computing without DC power. Evaluating the key figures of merit, including precision, power efficiency, computing density, retention time, and robustness, we conclude that our IGZO transistor-based eDRAM CiM is promising for low-power and scalable compute-in-eDRAM design. Acknowledgments: This work is supported by the Singapore Ministry of Education (Tier 2: MOE2018-T2-2-154, Tier 1: R-263-000-D65-114). References: [1] K. Nomura et al., Nature, 432 (7016), 488-492, 2004. [2] K. Han et al., VLSI, 2021, p. T10-1. [3] C. Sun et al., VLSI, 2021, p. T7-4. [4] J. Liu et al., IEDM, 2021, p. 462.
16

Nguyen, Ngoc-Anh, Olivier Schneegans, Jouhaiz Rouchou, Raphael Salot, Yann Lamy, Jean-Marc Boissel, Marjolaine Allain, Sylvain Poulet, and Sami Oukassi. "(G02 Best Presentation Award Winner) Elaboration and Characterization of CMOS Compatible, Pico-Joule Energy Consumption, Electrochemical Synaptic Transistors for Neuromorphic Computing." ECS Meeting Abstracts MA2022-01, no. 29 (July 7, 2022): 1293. http://dx.doi.org/10.1149/ma2022-01291293mtgabs.

Abstract:
Non-von Neumann computing based on artificial synapses built from electrochemical random-access memory (ECRAM) has attracted tremendous attention owing to its capability to perform parallel operations, thus reducing the time and energy spent [1-3]. Existing ECRAM synapses comprise two-terminal memristors and three-terminal synaptic transistors (SynTs). While low cost, scalability, and high density are the highlights of memristors, their nonlinear, asymmetric state modulation, high ON-current draw, and sneak paths in crossbar-array integration prevent them from becoming the ideal synaptic elements for artificial neural networks (ANNs) [4]. The SynT configuration, on the other hand, offers additional electrolyte-gated control, with which the ion doping content can be adjusted via redox reactions, thus decoupling write and read actions and improving the linearity of programming states [5-6]. Nevertheless, existing SynTs suffer from various integration issues stemming from liquid-based ionic conductors and manually exfoliated channels. Moreover, several kinds of SynTs possess highly conductive channels, in the range of µS to mS, significantly scaling up the energy spent reading analog states. Despite numerous communications on the performance of different ECRAMs, a comprehensive electrochemical view of ion intercalation into the active material, the main root of conductance modulation, is clearly missing. In this work, we present the elaboration procedure of an all-solid-state synaptic transistor composed of nanoscale electrolyte and channel layers. The devices have been elaborated on 8-inch silicon wafers using microfabrication processes compatible with conventional semiconductor technology and CMOS back-end-of-line (BEOL) integration (Figure 1a). We demonstrate the excellent synaptic plasticity properties of short-term potentiation (STP) and long-term potentiation (LTP) of our SynT. We performed tests to study the correlation between linearity, asymmetry, and the number of analog states. By averaging the amount of injected ions per write operation, we estimate the energy consumed for switching between adjacent states of this device at 22.5 pJ, yielding an area-normalized energy of 4 fJ/µm². In addition, operating in the nS range, our SynTs meet the critical criteria of low energy consumption for both write and read operations. Endurance was demonstrated by cycling in ambient conditions through 100 states of potentiation and depression for over 1000 cycles, with only a slight variation of the G_max/G_min ratio of 6.2% (Figure 1b, c). Approximately 95% accuracy in an MNIST pattern recognition test on an ANN in the crossbar-array configuration, obtained by simulation with SynTs as the synaptic elements, reassures us that the SynT is a promising candidate for future neuromorphic computing hardware. To shed light on the intercalation of Li ions into the TiO2 layer, a further electrochemical study was performed on a cell comprising Ti/TiO2/LiPON/Li, corresponding to the SynT gate stack. This understanding will help elucidate the correlation with the conductance modulation characteristics of a synaptic transistor. Multiple tests were carried out, including cyclic voltammetry (CV) at different scan rates, rate capability with galvanostatic cycling with potential limitation (GCPL), and electrochemical impedance spectroscopy (EIS) at different states of charge. A circuit model was introduced to fit the frequency response of the cell, and it explained well the charging behavior at different OCVs (Figure 1d).
References: [1] P. Narayanan et al., "Toward on-chip acceleration of the backpropagation algorithm using nonvolatile memory," IBM J. Res. Dev., vol. 61, no. 4/5, p. 11:1-11:11, Jul. 2017, doi: 10.1147/JRD.2017.2716579. [2] J. Tang et al., "ECRAM as Scalable Synaptic Cell for High-Speed, Low-Power Neuromorphic Computing," in 2018 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, Dec. 2018, p. 13.1.1-13.1.4, doi: 10.1109/IEDM.2018.8614551. [3] Y. Li et al., "In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory," Front. Neurosci., vol. 15, p. 636127, Apr. 2021, doi: 10.3389/fnins.2021.636127. [4] M. A. Zidan, H. A. H. Fahmy, M. M. Hussain, and K. N. Salama, "Memristor-based memory: The sneak paths problem and solutions," Microelectron. J., vol. 44, no. 2, pp. 176–183, Feb. 2013, doi: 10.1016/j.mejo.2012.10.001. [5] Y. van de Burgt et al., "A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing," Nat. Mater., vol. 16, no. 4, pp. 414–418, Apr. 2017, doi: 10.1038/nmat4856. [6] E. J. Fuller et al., "Li-Ion Synaptic Transistor for Low Power Analog Computing," Adv. Mater., vol. 29, no. 4, p. 1604310, Jan. 2017, doi: 10.1002/adma.201604310.
Figure 1
17

Addimando, N., M. Engel, F. Schwarz, and M. Batič. "A DEEP LEARNING APPROACH FOR CROP TYPE MAPPING BASED ON COMBINED TIME SERIES OF SATELLITE AND WEATHER DATA." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 31, 2022): 1301–8. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-1301-2022.

Abstract:
Global Earth Monitor (GEM, Horizon 2020) takes advantage of the large volumes of available Earth Observation (EO), weather, climate and other non-EO data to establish economically viable continuous monitoring of the Earth. Within the GEM framework, the development of scalable and cost-effective solutions is being tested on several use cases, crop identification being one of them. Crop identification uses a combination of EO and weather data to enable automatic identification of crops. The use case supports operational decisions when managing crops and the monitoring of actual vs. planned or reported agricultural land use (e.g., Common Agricultural Policy monitoring). Satellite data and weather data come at very different temporal and spatial resolutions: the Sentinel-2 constellation nominally provides an observation of a field every 5 days at 10 m spatial resolution, while weather data comes as continuous hourly time series at multi-km spatial resolution. We have designed ad-hoc routines to spatially aggregate satellite data at field level and to systematically compose layers of series with different time discretizations, so that each EO observation is associated with a complete time series (of appropriate length) of weather variables at daily resolution. For each field, we extract the time series of the median over field pixels of the Sentinel-2 L1C bands, cloud mask and cloud probability. For this we take advantage of Sentinel Hub's Statistical API (Sinergise, 2020), which enables the retrieval of statistics of band values and derived indices over a specified geographic area and time range. Using the meteoblue dataset API (meteoblue, 2017), complete time series of daily weather data (NEMS4 model, meteoblue, 2008) are then associated with each field observation, following the systematic layer composition approach mentioned above. An appropriate time series length is defined for each of the 17 weather variables we considered. To handle this kind of multi-dimensional layered data, we use a flexible encoding-decoding framework (FlexMod, designed by TUM as part of the GEM project): multiple encoders are designed for features of different time lengths (namely EO data and weather variables) and are then passed to the decoder via a mediator. Thanks to the flexible design of the FlexMod framework, different models and architectures can be easily tested by simply defining new encoders and/or decoders. We present results obtained on a dataset in Slovenia, where crop fields are labelled according to a Hierarchical Crop and Agriculture Taxonomy (HCAT). This taxonomy, based on the EAGLE matrix and EU regulations, is the one adopted in the EuroCrops project (Schneider et al. 2021). The classification of field crops takes advantage of Sentinel-2 satellite data and Numerical Weather Prediction model output data. We exploit the potential of FlexMod to test different feature extractors, temporal encoding frameworks and decoders, and we present a comparison between results obtained training a long short-term memory (LSTM) implementation (Breizhcrops, Rußwurm et al. 2020) and a self-attention transformer model (Vaswani et al. 2017), the latter showing the best performance, with accuracy 0.904 and Cohen's kappa 0.824. We moreover investigate the role of weather data by benchmarking the results against those obtained with just satellite imagery. To better appraise the influence of the weather data, we analyse how perturbing the weather data in the testing dataset affects the final results. So far, we obtain in both cases very similar accuracies and Cohen's kappa values. A deeper analysis of crop-specific scores (precision, recall, F1) suggests that the training and testing datasets are too limited in terms of size and crop variability to draw any general conclusion on the role of weather. As future developments, once the EuroCrops datasets are ready, we plan to expand the training and testing datasets to cover a higher variability of climatological areas and increase the number of the so-far under-represented crops, in an attempt to draw more general conclusions about the influence of weather and the predictability of specific crop classes. Moreover, given the encouraging scores, we aim to perform crop type mapping at least at the European scale, thanks to the availability of the EuroCrops data and the cost-effective big data solutions developed during the GEM project.
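
The data-preparation step, reducing each field to per-date band medians and attaching a trailing window of daily weather to every observation, is the part most readers will want to reproduce. Here is a toy version, with random arrays standing in for the Sentinel Hub Statistical API and meteoblue responses, and with illustrative shapes and window length.

```python
import numpy as np

rng = np.random.default_rng(3)
N_DATES, N_PIX, N_BANDS, W = 20, 120, 13, 30   # S2 revisits, field pixels, bands

pixels = rng.random((N_DATES, N_PIX, N_BANDS)) # one field's pixel stack
cloudy = rng.random(N_DATES) > 0.7             # per-date cloud mask
weather = rng.random((365, 17))                # daily weather, 17 variables

field_series = np.median(pixels, axis=1)       # (N_DATES, N_BANDS) field median
obs_days = np.sort(rng.choice(np.arange(W, 365), N_DATES, replace=False))

samples = []
for t, day in enumerate(obs_days):
    if cloudy[t]:
        continue                               # drop cloud-contaminated dates
    window = weather[day - W:day]              # (W, 17) trailing weather series
    samples.append((field_series[t], window))  # one (EO, weather) training pair

print(len(samples), "clean observations with weather windows")
```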
18

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Abstract:
To date, discrete event stochastic simulations of large scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it has been widely accepted that spatial factor plays a critical role in the dynamics of most biological reaction systems. The NSM (the Next Sub-Volume Method), a spatial variation of the Gillespie’s stochastic simulation algorithm (SSA), has been proposed for spatially stochastic simulation of those systems. While being able to explore high degree of parallelism in systems, NSM is inherently sequential, which still suffers from the problem of low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) TW (time warp) simulator to enable the fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (the Abstract NSM), a parallel variation of the NSM. The simulation results of classical Lotka-Volterra biological reaction system show that our time warp simulator obtains remarkable parallel speed-up against sequential execution of the NSM.I.IntroductionThe goal of Systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with system theory, mathematics and computer science [1][3], since the isolated knowledge of parts can not explain the dynamics of a whole system. As the complement of “wet-lab” experiments, stochastic simulation, being called the “dry-computational” experiment, plays a more and more important role in computing systems biology [2]. Among many methods explored in systems biology, discrete event stochastic simulation is of greatly importance [4][5][6], since a great number of researches have present that stochasticity or “noise” have a crucial effect on the dynamics of small population biological reaction systems [4][7]. Furthermore, recent research shows that the stochasticity is not only important in biological reaction systems with small population but also in some moderate/large population systems [7].To date, Gillespie’s SSA [8] is widely considered to be the most accurate way to capture the dynamics of biological reaction systems instead of traditional mathematical method [5][9]. However, SSA-based stochastic simulation is confronted with two main challenges: Firstly, this type of simulation is extremely time-consuming, since when the types of species and the number of reactions in the biological system are large, SSA requires a huge amount of steps to sample these reactions; Secondly, the assumption that the systems are spatially homogeneous or well-stirred is hardly met in most real biological systems and spatial factors play a key role in the behaviors of most real biological systems [19][20][21][22][23][24]. The next sub-volume method (NSM) [18], presents us an elegant way to access the special problem via domain partition. To our disappointment, sequential stochastic simulation with the NSM is still very time-consuming, and additionally introduced diffusion among neighbor sub-volumes makes things worse. Whereas, the NSM explores a very high degree of parallelism among sub-volumes, and parallelization has been widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. 
Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation would be particularly promising. Although there are a few attempts have been conducted [29][30][31], research in this filed is still in its infancy and many issues are in need of further discussion. The next section of the paper presents the background and related work in this domain. In section III, we give the details of design and implementation of model interfaces of LP paradigm and the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.II. Background and Related WorkA. Parallel Discrete Event Simulation (PDES)The notion Logical Process (LP) is introduced to PDES as the abstract of the physical process [26], where a system consisting of many physical processes is usually modeled by a set of LP. LP is regarded as the smallest unit that can be executed in PDES and each LP holds a sub-partition of the whole system’s state variables as its private ones. When a LP processes an event, it can only modify the state variables of its own. If one LP needs to modify one of its neighbors’ state variables, it has to schedule an event to the target neighbor. That is to say event message exchanging is the only way that LPs interact with each other. Because of the data dependences or interactions among LPs, synchronization protocols have to be introduced to PDES to guarantee the so-called local causality constraint (LCC) [26]. By now, there are a larger number of synchronization algorithms have been proposed, e.g. the null-message [26], the time warp (TW) [32], breath time warp (BTW) [33] and etc. According to whether can events of LPs be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. However, Dematté and Mazza have theoretically pointed out the disadvantages of pure conservative parallel simulation for biochemical reaction systems [31]. B. NSM and ANSM The NSM is a spatial variation of Gillespie’ SSA, which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM presents us a pretty good way to tackle the aspect of space in biological systems by partitioning a spatially inhomogeneous system into many much more smaller “homogeneous” ones, which can be simulated by SSA separately. However, the NSM is inherently combined with the sequential semantics, and all sub-volumes share one common data structure for events or messages. Thus, directly parallelization of the NSM may be confronted with the so-called boundary problem and high costs of synchronously accessing the common data structure [29]. In order to obtain higher efficiency of parallel simulation, parallelization of NSM has to firstly free the NSM from the sequential semantics and secondly partition the shared data structure into many “parallel” ones. One of these is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled by a logical process (LP) based on the LP paradigm of PDES, where each LP held its own event queue and state variables (see Fig. 1). In addition, the so-called retraction mechanism was introduced in the ANSM too (see algorithm 1). Besides, based on the ANSM, Wang etc. [30] have experimentally tested the performance of several PDES algorithms in the platform called YH-SUPE [27]. 
However, their platform is designed for general simulation applications, thus it would sacrifice some performance for being not able to take into account the characteristics of biological reaction systems. Using the similar ideas of the ANSM, Dematté and Mazza have designed and realized an optimistic simulator. However, they processed events in time-stepped manner, which would lose a specific degree of precisions compared with the discrete event manner, and it is very hard to transfer a time-stepped simulation to a discrete event one. In addition, Jeschke etc.[29] have designed and implemented a dynamic time-window simulator to execution the NSM in parallel on the grid computing environment, however, they paid main attention on the analysis of communication costs and determining a better size of the time-window.Fig. 1: the variations from SSA to NSM and from NSM to ANSMC. JAMES II JAMES II is an open source discrete event simulation experiment framework developed by the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on the plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML-files [13]. Combined with the factory method pattern JAMES II innovatively split up the model and simulator, which makes JAMES II is very flexible to add and reuse both of models and simulators. In addition, JAMES II supports various types of modelling formalisms, e.g. cellular automata, discrete event system specification (DEVS), SpacePi, StochasticPi and etc.[14]. Besides, a well-defined simulator selection mechanism is designed and developed in JAMES II, which can not only automatically choose the proper simulators according to the modeling formalism but also pick out a specific simulator from a serious of simulators supporting the same modeling formalism according to the user settings [15].III. The Model Interface and SimulatorAs we have mentioned in section II (part C), model and simulator are split up into two separate parts. Thus, in this section, we introduce the designation and implementation of model interface of LP paradigm and more importantly the time warp simulator.A. The Mod Interface of LP ParadigmJAMES II provides abstract model interfaces for different modeling formalism, based on which Wang etc. have designed and implemented model interface of LP paradigm[16]. However, this interface is not scalable well for parallel and distributed simulation of larger scale systems. In our implementation, we accommodate the interface to the situation of parallel and distributed situations. Firstly, the neighbor LP’s reference is replaced by its name in LP’s neighbor queue, because it is improper even dangerous that a local LP hold the references of other LPs in remote memory space. In addition, (pseudo-)random number plays a crucial role to obtain valid and meaningful results in stochastic simulations. However, it is still a very challenge work to find a good random number generator (RNG) [34]. Thus, in order to focus on our problems, we introduce one of the uniform RNGs of JAMES II to this model interface, where each LP holds a private RNG so that random number streams of different LPs can be independent stochastically. B. The Time Warp SimulatorBased on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which contains the (master-)simulator, (LP-)simulator. 
The simulator works strictly as master/worker(s) paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master-and LP-)simulators, where a simulator holds all the proxies of targeted ones that work on remote workers. One of the advantages of this communication approach is that PDES codes can be transferred to various hardwire environment, such as Clusters, Grids and distributed computing environment, with only a little modification; The other is that RMI mechanism is easy to realized and independent to any other non-Java libraries. Since the straggler event problem, states have to be saved to rollback events that are pre-processed optimistically. Each time being modified, the state is cloned to a queue by Java clone mechanism. Problem of this copy state saving approach is that it would cause loads of memory space. However, the problem can be made up by a condign GVT calculating mechanism. GVT reduction scheme also has a significant impact on the performance of parallel simulators, since it marks the highest time boundary of events that can be committed so that memories of fossils (processed events and states) less than GVT can be reallocated. GVT calculating is a very knotty for the notorious simultaneous reporting problem and transient messages problem. According to our problem, another GVT algorithm, called Twice Notification (TN-GVT) (see algorithm 2), is contributed to this already rich repository instead of implementing one of GVT algorithms in reference [26] and [28].This algorithm looks like the synchronous algorithm described in reference [26] (pp. 114), however, they are essentially different from each other. This algorithm has never stopped the simulators from processing events when GVT reduction, while algorithm in reference [26] blocks all simulators for GVT calculating. As for the transient message problem, it can be neglect in our implementation, because RMI based remote communication approach is synchronized, that means a simulator will not go on its processing until the remote the massage get to its destination. And because of this, the high-costs message acknowledgement, prevalent over many classical asynchronous GVT algorithms, is not needed anymore too, which should be constructive to the whole performance of the time warp simulator.IV. Benchmark Model and Experiment ResultsA. The Lotka-Volterra Predator-prey SystemIn our experiment, the spatial version of Lotka-Volterra predator-prey system is introduced as the benchmark model (see Fig. 2). We choose the system for two considerations: 1) this system is a classical experimental model that has been used in many related researches [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple but helpful enough to test the issues we are interested in. The space of predator-prey System is partitioned into a2D NXNgrid, whereNdenotes the edge size of the grid. Initially the population of the Grass, Preys and Predators are set to 1000 in each single sub-volume (LP). In Fig. 2,r1,r2,r3stand for the reaction constants of the reaction 1, 2 and 3 respectively. We usedGrass,dPreyanddPredatorto stand for the diffusion rate of Grass, Prey and Predator separately. 
IV. Benchmark Model and Experiment Results

A. The Lotka-Volterra Predator-Prey System

In our experiments, the spatial version of the Lotka-Volterra predator-prey system is used as the benchmark model (see Fig. 2). We chose this system for two reasons: 1) it is a classical experimental model that has been used in much related research [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple, yet sufficient to test the issues we are interested in. The space of the predator-prey system is partitioned into a 2D N x N grid, where N denotes the edge size of the grid. Initially, the populations of Grass, Prey, and Predators are set to 1000 in each sub-volume (LP). In Fig. 2, r1, r2, and r3 stand for the reaction constants of reactions 1, 2, and 3, respectively, and dGrass, dPrey, and dPredator stand for the diffusion rates of Grass, Prey, and Predator, respectively. Similar to reference [8], we assume that the population of the grass remains stable, so dGrass is set to zero.

R1: Grass + Prey -> 2 Prey           (1)
R2: Predator + Prey -> 2 Predator    (2)
R3: Predator -> NULL                 (3)
r1 = 0.01;  r2 = 0.01;  r3 = 10      (4)
dGrass = 0.0;  dPrey = 2.5;  dPredator = 5.0    (5)

Fig. 2: The predator-prey system.

B. Experiment Results

The simulation runs were executed on a Linux cluster with 40 computing nodes. Each computing node is equipped with two 64-bit 2.53 GHz Intel Xeon quad-core processors and 24 GB RAM, and the nodes are interconnected via Gigabit Ethernet. The operating system is Kylin Server 3.5, with kernel 2.6.18. Experiments were conducted on benchmark models of different sizes to investigate the execution time and speedup of the time warp simulator.

As shown in Fig. 3, the execution times of simulations on a single processor with 8 cores are compared. The results show that simulating a larger scale system for the same simulation time takes more wall clock time, which testifies to the fact that larger scale systems lead to more events in the same time interval. More importantly, the blue line shows that the sequential simulation performance declines very fast as the model scale grows. The bottleneck of the sequential simulator is the cost of accessing a long event queue to choose the next event. Besides, from the comparison between group 1 and group 2 in this experiment, we can also conclude that a high diffusion rate greatly increases the simulation time in both sequential and parallel simulations. This is because the LP paradigm has to split a diffusion into two events (diffusion-out and diffusion-in) for the two interacting LPs involved, and a high diffusion rate leads to a high proportion of diffusions relative to reactions.

In the second step, shown in Fig. 4, the relationship between the speedup of time warp for two different model sizes and the number of worker cores involved is demonstrated. The speedup is calculated against the sequential NSM execution of the spatial reaction-diffusion model with the same model size and parameters. Fig. 4 compares the speedup of time warp on a 64 x 64 grid and on a 100 x 100 grid. In the case of the 64 x 64 grid, using only one node, the lowest speedup (slightly above 1) is achieved with two cores, and the highest speedup (about 6) is achieved with 8 cores. The influence of the number of cores used in a parallel simulation was also investigated: in most cases, a larger number of cores brings considerable improvements in parallel simulation performance. Moreover, comparing the two curves in Fig. 4, the simulation of the larger model achieves a better speedup. Combined with the timing tests (Fig. 3), we find that the sequential simulator's performance declines sharply when the model scale becomes very large, which correspondingly gives the time warp simulator a better speedup.

Fig. 3: Execution time (wall clock time) of sequential and time warp simulation with respect to different model sizes (N = 32, 64, 100, and 128) and model parameters, on a single computing node with 8 cores. Results are grouped by diffusion rates (Group 1: Sequential 1 and Time Warp 1, dPrey = 2.5, dPredator = 5.0; Group 2: Sequential 2 and Time Warp 2, dPrey = 0.25, dPredator = 0.5).

Fig. 4: Speedup of time warp with respect to the number of worker cores and the model size (N = 64 and 100). Worker cores are chosen from one computing node. Diffusion rates are dPrey = 2.5, dPredator = 5.0, and dGrass = 0.0.
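To make the stochastic rates in Eqs. (1)-(5) concrete, the following sketch computes the propensities of R1-R3 and the diffusion events for one sub-volume and samples the exponentially distributed waiting time to its next event, in the standard SSA/NSM fashion. It is an illustrative sketch, not the simulator's code, under the assumption that the diffusion constants already aggregate over the neighboring directions; the class name is hypothetical.

import java.util.Random;

// Illustrative propensity calculation for one sub-volume (LP) of the
// predator-prey benchmark: reactions R1-R3 plus diffusion events, with
// the next-event delay drawn from an exponential distribution of rate a0.
public class SubVolume {

    // Reaction and diffusion constants from Eqs. (4)-(5).
    static final double R1 = 0.01, R2 = 0.01, R3 = 10.0;
    static final double D_PREY = 2.5, D_PREDATOR = 5.0; // dGrass = 0

    long grass, prey, predator;   // current populations
    final Random rng;             // the sub-volume's private RNG

    SubVolume(long grass, long prey, long predator, long seed) {
        this.grass = grass;
        this.prey = prey;
        this.predator = predator;
        this.rng = new Random(seed);
    }

    // Sum of all event propensities in this sub-volume; the diffusion
    // constants are assumed to already account for the neighbor count.
    double totalPropensity() {
        double a1 = R1 * grass * prey;        // R1: Grass + Prey -> 2 Prey
        double a2 = R2 * predator * prey;     // R2: Predator + Prey -> 2 Predator
        double a3 = R3 * predator;            // R3: Predator -> NULL
        double dOutPrey = D_PREY * prey;          // Prey diffusion out
        double dOutPred = D_PREDATOR * predator;  // Predator diffusion out
        return a1 + a2 + a3 + dOutPrey + dOutPred;
    }

    // Time advance to this sub-volume's next event: Exponential(a0).
    double nextEventDelay() {
        double a0 = totalPropensity();
        return a0 > 0 ? -Math.log(1.0 - rng.nextDouble()) / a0
                      : Double.POSITIVE_INFINITY;
    }
}

For example, at the initial populations of 1000 per sub-volume, R1 and R2 each have propensity 0.01 * 1000 * 1000 = 10^4 and R3 has 10 * 1000 = 10^4, so all three reactions initially fire at comparable rates, while the prey and predator diffusion propensities are 2500 and 5000, respectively.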
V. Conclusion and Future Work

In this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, rollback, and especially GVT reduction in the parallel execution of simulations. The Lotka-Volterra predator-prey system is chosen as the benchmark model to test the performance of our time warp simulator, and the best experimental results show that it can obtain a speedup of about 6 over the sequential simulation.

The domain this paper is concerned with is still in its infancy, and many interesting issues are worthy of further investigation. For example, there are many other excellent PDES optimistic synchronization algorithms (e.g., breathing time warp, BTW [33]); as a next step, we would like to integrate some of them into JAMES II. In addition, Gillespie approximation methods (tau-leaping [10], etc.) sacrifice some degree of precision for higher simulation speed, but still do not address the spatial aspect of biological reaction systems. The combination of the spatial element and approximation methods would be very interesting and promising; however, the parallel execution of tau-leaping methods will have to overcome many obstacles on the road ahead.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (NSFC) Grant (No. 60773019) and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 200899980004). The authors would like to express their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany, for their invaluable advice and kind help with JAMES II.

References

[1] H. Kitano, "Computational systems biology," Nature, vol. 420, no. 6912, pp. 206-210, November 2002.
[2] H. Kitano, "Systems biology: a brief overview," Science, vol. 295, no. 5560, pp. 1662-1664, March 2002.
[3] A. Aderem, "Systems biology: its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. [Online]. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020.
[4] H. de Jong, "Modeling and simulation of genetic regulatory systems: a literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.
[5] C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.
[6] D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5016, ch. 5, pp. 125-167.
[7] Y. Tao, Y. Jia, and G. T. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.
[8] D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.
[9] D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.
[10] D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.
Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 827-835.J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS'07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework james ii-implications for computational systems biology," Brief Bioinform, vol. 11, no. 3, pp. bbp067-300, January 2010.A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! a perspective for computational biology based on james ii," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5054, ch. 9, pp. 123-138. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9.R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," pads, vol. 0, pp. 91-98, 2008.Bing Wang, Jan Himmelspach, Roland Ewald, Yiping Yao, and Adelinde M Uhrmacher. Experimental analysis of logical process simulation algorithms in james ii[C]// In M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, editors, Proceedings of the Winter Simulation Conference, IEEE Computer Science, 2009. 1167-1179.Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 836-844.J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases." Systems biology, vol. 1, no. 2, pp. 230-236, December 2004.K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways? Towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.W. G. Wilson, A. M. Deroos, and E. Mccauley, "Spatial instabilities within the diffusive lotka-volterra system: Individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.K. Kruse and J. Elf. Kinetics in spatially extended systems. In Z. Szallasi, J. Stelling, and V. Periwal, editors, System Modeling in Cellular Biology. From Concepts to Nuts and Bolts, pages 177–198. MIT Press, Cambridge, MA, 2006.M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.R. M. 
[26] R. M. Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.
[27] Y. Yao and Y. Zhang, "Solution for analytic simulation based on parallel processing," Journal of System Simulation, vol. 20, no. 24, pp. 6617-6621, 2008.
[28] G. Chen and B. K. Szymanski, "DSIM: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th Conference on Winter Simulation. Winter Simulation Conference, 2005, pp. 346-355.
[29] M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.
[30] B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," in International Workshop on High Performance Computational Systems Biology, pp. 91-100, October 2009.
[31] L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5307, ch. 16, pp. 191-210.
[32] D. R. Jefferson, "Virtual time," ACM Transactions on Programming Languages and Systems, vol. 7, no. 3, pp. 404-425, July 1985.
[33] J. S. Steinman, "Breathing time warp," SIGSIM Simulation Digest, vol. 23, no. 1, pp. 109-118, July 1993. [Online]. Available: http://dx.doi.org/10.1145/174134.158473.
[34] S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Communications of the ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
APA, Harvard, Vancouver, ISO, and other styles
19

Lipatov, Alexey, Pradeep Chaudhary, Zhao Guan, Haidong Lu, Gang Li, Olivier Crégut, Kokou Dodzi Dorkenoo, et al. "Direct observation of ferroelectricity in two-dimensional MoS2." npj 2D Materials and Applications 6, no. 1 (March 15, 2022). http://dx.doi.org/10.1038/s41699-022-00298-5.

Full text
Abstract:
Recent theoretical predictions of ferroelectricity in two-dimensional (2D) van der Waals materials reveal exciting possibilities for their use in scalable low-power electronic devices with polarization-dependent functionalities. These prospects have been further invigorated by the experimental evidence of the polarization response in some transition metal chalcogenides (TMCs)—a group of narrow-band semiconductors and semimetals with a wealth of application potential. Among the TMCs, molybdenum disulfide (MoS2) is known as one of the most promising and robust 2D electronic materials. However, in spite of theoretical predictions, no ferroelectricity has been experimentally detected in MoS2, while the emergence of this property could enhance its potential for electronics applications. Here, we report the experimental observation of a stable room-temperature out-of-plane polarization ordering in 2D MoS2 layers, where polarization switching is realized by mechanical pressure induced by a tip of a scanning probe microscope. Using this approach, we create the bi-domain polarization states, which exhibit different piezoelectric activity, second harmonic generation, surface potential, and conductivity. Ferroelectric MoS2 belongs to the distorted trigonal structural 1T” phase, where a spontaneous polarization is inferred by its P3m1 space-group symmetry and corroborated by theoretical modeling. Experiments on the flipped flakes reveal that the 1T”-MoS2 samples consist of the monolayers with randomly alternating polarization orientation, which form stable but switchable “antipolar” head-to-head or tail-to-tail dipole configurations. Mechanically written domains are remarkably stable facilitating the application of 1T”-MoS2 in flexible memory and electromechanical devices.
APA, Harvard, Vancouver, ISO, and other styles
20

Portier, Marc, Cedric Decruw, Katrina Exter, Rory Meyer, Lennert Tyberghein, and Laurian Van Maldeghem. "Contemporary Data Management for Biodiversity Observation Networks leading to Linked Open Data Publishing through Distributed Techniques applying RO-Crate and GitHub Actions." Biodiversity Information Science and Standards 6 (September 9, 2022). http://dx.doi.org/10.3897/biss.6.94630.

Full text
Abstract:
Biodiversity Observation Networks (BONs) are important sources of information about the health and wealth of biodiversity in our world. By observing the presence or absence of species in different environments and by coupling with measurements of environmental parameters, they contribute important information to our knowledge of the natural world and to our ability to model and predict the effects of climate change. Knowledge gained from BONs can also be used to assist wider society (political, social, and economic bodies) in making biodiversity-sensitive decisions. For the data outputs from BONs to be useful to a wide audience, management of those data is crucial: the data should be published openly and they must be Findable, Accessible, Interoperable, and Reusable (FAIR). It is important that the data can be found and understood by anyone who could use them. For BON data to lead to new science, it is necessary that the data can be accessed programmatically, that full provenance is provided, and that data from different sources are interoperable. Semantic and technical interoperability should be coupled with user-friendly data archiving, managing, discovery, and User Interfaces (UIs) for those creating and for those using the data. The importance of lowering the hurdle to creating and sharing data in a FAIR way is not to be underestimated; the hardest part of the FAIR journey is often its uptake rather than its technology. The importance of eDNA (environmental DNA) to BONs, highlights the particular challenge to the management of BON data because of the complexity of the practical workflow (many parties involved doing the actual sampling, samples being shipped as well as biobanked, highly technical procedures involved at various stages), the complexity of the analysis of the DNA to measure “occurrences”, and the rapid evolution of the field of biotechnology. Version management to accommodate updates to biotechnology pipelines and reference databases, the use of IDs to allow data to be linked to evolving knowledge rather than to static information, semantic annotation aimed at multiple audiences (omics-expert and less-expert), and linking multiple-distributed datasets (omics, image, environmental, etc.) are all crucial aspects to FAIR management of omics data. As data standards evolve much more slowly than the scientific possibilities, it is also important to archive, annotate, and link all the data in a flexible way so it can be exported into today’s and tomorrow’s data formats. All of the above expectations are added as extra weight onto the field scientists and lab workers dealing with the actual samples and producing their digital outcomes. At the Vlaams Instituut voor de Zee (VLIZ) we are managing data from several BONs, and for this we are developing a data management approach that is based on Research Object Crate (RO-Crate) data packaging including the following elements: a GitHub entry point for holding the RO-Crates, uploaded data (co-located files or links to data held or published elsewhere), a straight-forward GitHub-Action workflow to publish and uplift the contents as linked open data, and UIs for data upload, search, and select. BON contributors and data managers will be able to upload the BON data (e.g., sampling log sheets, ENA (European Nucleotide Archive) accession codes, SOPs (Standard Operating Procedures), etc.), interfacing directly with GitHub or through our assisting tool to upload and create the RO-Crate descriptions, and to semantically describe their data. 
Scientists who analyse data will also be able to upload their workflow outputs and provenance metadata descriptions and link those to the BON data that their analysis is based on. By collecting the data and metadata in one place, the BON outputs can be rearranged into whatever published formats are required. The advantages of this approach are multiple: It relocates the authority on "what can and should be said about the data" back to the authors, who are not then limited by the data formats of the publishers. The application profile and required fields they themselves agree upon allow for expressing all the nuance and targeted information that they want, and thus create an overall stronger affinity between the produced content and its maintainers. Embracing the semantic web techniques marries a high-level uniform processing and indexing with a diversity of added detail that can be retrieved by drilling down. Similarly, clear cross-references to managed terms in managed vocabularies allows further serendipitous connections to apparently unconnected domains. Providing digital assistance allows automated provenance tracking, easy linking to agreed managed vocabularies, as well as hiding cumbersome low-level technical details (such as Git operations, format specifications, etc.). By doing so as early as possible in the processing of the information, it avoids lost memory details, late quality checking, and lengthy round trips that all add to the cost of publishing, interpreting and reusing the produced data. Finally, the available open semantics in the datasets themselves allow for advanced additional post-processing that is not limited to that provided by centralised aggregation services, but also allows for providing fragmentations, indexes and navigational indicators that support a more scalable, distributed and federated real-time pathway to selecting and analysing data in the context of specific research questions.
APA, Harvard, Vancouver, ISO, and other styles
