Journal articles on the topic 'Disaggregated Memory'

Consult the top 50 journal articles for your research on the topic 'Disaggregated Memory.'


1

Cao, Wenqi, and Ling Liu. "Hierarchical Orchestration of Disaggregated Memory." IEEE Transactions on Computers 69, no. 6 (June 1, 2020): 844–55. http://dx.doi.org/10.1109/tc.2020.2968525.

2

Calciu, Irina, M. Talha Imran, Ivan Puddu, Sanidhya Kashyap, Hasan Al Maruf, Onur Mutlu, and Aasheesh Kolli. "Using Local Cache Coherence for Disaggregated Memory Systems." ACM SIGOPS Operating Systems Review 57, no. 1 (June 26, 2023): 21–28. http://dx.doi.org/10.1145/3606557.3606561.

Abstract:
Disaggregated memory provides many cost savings and resource provisioning benefits for current datacenters, but software systems enabling disaggregated memory access result in high performance penalties. These systems require intrusive code changes to port applications for disaggregated memory or employ slow virtual memory mechanisms to avoid code changes. Such mechanisms result in high overhead page faults to access remote data and high dirty data amplification when tracking changes to cached data at page-granularity. In this paper, we propose a fundamentally new approach for disaggregated memory systems, based on the observation that we can use local cache coherence to track applications' memory accesses transparently, without code changes, at cache-line granularity. This simple idea (1) eliminates page faults from the application critical path when accessing remote data, and (2) decouples the application memory access tracking from the virtual memory page size, enabling cache-line granularity dirty data tracking and eviction. Using this observation, we implemented a new software runtime for disaggregated memory that improves average memory access time and reduces dirty data amplification.
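The page-granularity versus cache-line-granularity contrast in the abstract above can be made concrete with a small sketch (all sizes and names here are illustrative assumptions, not the paper's implementation):

```python
# Illustrative sketch (all sizes assumed, not from the paper): dirty-data
# amplification when writeback tracking happens at page granularity versus
# cache-line granularity.

PAGE_SIZE = 4096   # bytes
CACHE_LINE = 64    # bytes

def dirty_bytes(write_addrs, granularity):
    """Bytes that must be written back to remote memory when each write
    dirties one whole tracking unit of the given granularity."""
    units = {addr // granularity for addr in write_addrs}
    return len(units) * granularity

# A scattered workload: one small write landing in each of 100 pages.
writes = [page * PAGE_SIZE + 128 for page in range(100)]

page_tracked = dirty_bytes(writes, PAGE_SIZE)   # 100 pages -> 409600 bytes
line_tracked = dirty_bytes(writes, CACHE_LINE)  # 100 lines ->   6400 bytes
print(page_tracked // line_tracked)             # 64x dirty amplification
```

For this scattered write pattern, page-granularity tracking writes back 64 times more data than cache-line tracking, which is the amplification the abstract's cache-coherence-based tracking aims to avoid.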
3

Maruf, Hasan Al, Yuhong Zhong, Hongyi Wang, Mosharaf Chowdhury, Asaf Cidon, and Carl Waldspurger. "Memtrade: Marketplace for Disaggregated Memory Clouds." ACM SIGMETRICS Performance Evaluation Review 51, no. 1 (June 26, 2023): 1–2. http://dx.doi.org/10.1145/3606376.3593553.

Abstract:
We present Memtrade, the first practical marketplace for disaggregated memory clouds. Clouds introduce a set of unique challenges for resource disaggregation across different tenants, including resource harvesting, isolation, and matching. Memtrade allows producer virtual machines (VMs) to lease both their unallocated memory and allocated-but-idle application memory to remote consumer VMs for a limited period of time. Memtrade does not require any modifications to host-level system software or support from the cloud provider. It harvests producer memory using an application-aware control loop to form a distributed transient remote memory pool with minimal performance impact; it employs a broker to match producers with consumers while satisfying performance constraints; and it exposes the matched memory to consumers through different abstractions. As a proof of concept, we propose two such memory access interfaces for Memtrade consumers -- a transient KV cache for specified applications and a swap interface that is application-transparent. Our evaluation shows that Memtrade provides significant performance benefits for consumers (improving average read latency up to 2.8X) while preserving confidentiality and integrity, with little impact on producer applications (degrading performance by less than 2.1%).
4

Maruf, Hasan Al, Yuhong Zhong, Hongyi Wang, Mosharaf Chowdhury, Asaf Cidon, and Carl Waldspurger. "Memtrade: Marketplace for Disaggregated Memory Clouds." Proceedings of the ACM on Measurement and Analysis of Computing Systems 7, no. 2 (May 19, 2023): 1–27. http://dx.doi.org/10.1145/3589985.

Abstract:
We present Memtrade, the first practical marketplace for disaggregated memory clouds. Clouds introduce a set of unique challenges for resource disaggregation across different tenants, including resource harvesting, isolation, and matching. Memtrade allows producer virtual machines (VMs) to lease both their unallocated memory and allocated-but-idle application memory to remote consumer VMs for a limited period of time. Memtrade does not require any modifications to host-level system software or support from the cloud provider. It harvests producer memory using an application-aware control loop to form a distributed transient remote memory pool with minimal performance impact; it employs a broker to match producers with consumers while satisfying performance constraints; and it exposes the matched memory to consumers through different abstractions. As a proof of concept, we propose two such memory access interfaces for Memtrade consumers -- a transient KV cache for specified applications and a swap interface that is application-transparent. Our evaluation using real-world cluster traces shows that Memtrade provides significant performance benefit for consumers (improving average read latency up to 2.8X) while preserving confidentiality and integrity, with little impact on producer applications (degrading performance by less than 2.1%).
5

Min, Xinhao, Kai Lu, Pengyu Liu, Jiguang Wan, Changsheng Xie, Daohui Wang, Ting Yao, and Huatao Wu. "SepHash: A Write-Optimized Hash Index On Disaggregated Memory via Separate Segment Structure." Proceedings of the VLDB Endowment 17, no. 5 (January 2024): 1091–104. http://dx.doi.org/10.14778/3641204.3641218.

Abstract:
Disaggregated memory separates compute and memory resources into independent pools connected by fast RDMA (Remote Direct Memory Access) networks, which can improve memory utilization, reduce cost, and enable elastic scaling of compute and memory resources. Hash indexes provide high-performance single-point operations and are widely used in distributed systems and databases. However, under disaggregated memory, existing hash indexes suffer from write performance degradation due to high resize overhead and concurrency control overhead. Traditional write-optimized hash indexes are not efficient for disaggregated memory and sacrifice read performance. In this paper, we propose SepHash, a write-optimized hash index for disaggregated memory. First, SepHash proposes a two-level separate segment structure that significantly reduces the bandwidth consumption of resize operations. Second, SepHash employs a low-latency concurrency control strategy to eliminate unnecessary mutual exclusion and check overhead during insert operations. Finally, SepHash designs an efficient cache and filter to accelerate read operations. The evaluation results show that, compared to state-of-the-art distributed hash indexes, SepHash achieves a 3.3X higher write performance while maintaining comparable read performance.
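The abstract above attributes SepHash's write gains partly to cutting resize bandwidth. A generic extendible-hashing sketch (illustrative only; this is not SepHash's separate-segment design, and the capacity and counters are assumptions) shows why resizes are costly on disaggregated memory: splitting a full segment rehashes and rewrites every entry it holds, and each rewrite would be a remote RDMA write:

```python
# Illustrative extendible-hashing sketch (Python) -- not SepHash's
# separate-segment design. Splitting a full segment rehashes and rewrites
# every entry in it; on disaggregated memory each such rewrite is an RDMA
# write, which is the bandwidth cost SepHash's structure tries to reduce.

SEGMENT_CAPACITY = 4   # assumed tiny capacity to force frequent splits

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.segments = [dict(), dict()]
        self.local_depth = [1, 1]
        self.directory = [0, 1]        # directory slot -> segment index
        self.remote_writes = 0         # simulated RDMA writes

    def _seg(self, key):
        return self.directory[hash(key) & ((1 << self.global_depth) - 1)]

    def get(self, key):
        return self.segments[self._seg(key)].get(key)

    def insert(self, key, value):
        s = self._seg(key)
        if len(self.segments[s]) >= SEGMENT_CAPACITY:
            self._split(s)
            s = self._seg(key)
        self.segments[s][key] = value
        self.remote_writes += 1

    def _split(self, s):
        if self.local_depth[s] == self.global_depth:
            self.directory += self.directory       # double the directory
            self.global_depth += 1
        self.local_depth[s] += 1
        new = len(self.segments)
        self.segments.append(dict())
        self.local_depth.append(self.local_depth[s])
        # half of the slots that pointed at s now point at the new segment
        for i in range(len(self.directory)):
            if self.directory[i] == s and (i >> (self.local_depth[s] - 1)) & 1:
                self.directory[i] = new
        # every entry of the split segment is rehashed and rewritten remotely
        old, self.segments[s] = self.segments[s], dict()
        for k, v in old.items():
            self.segments[self._seg(k)][k] = v
            self.remote_writes += 1

h = ExtendibleHash()
for k in range(16):
    h.insert(k, k * 10)
print(h.remote_writes)   # 24: 16 inserts plus 8 extra writes from two splits
```

Even in this toy, a third of the remote writes are pure resize overhead; at datacenter scale that overhead is what motivates resize-friendly layouts like SepHash's.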
6

Ishizaki, Teruaki, and Yoshiro Yamabe. "Memory-centric Architecture for Disaggregated Computers." NTT Technical Review 19, no. 7 (July 2021): 65–69. http://dx.doi.org/10.53829/ntr202107fa9.

7

Koo, Bonmoo, Jaesang Hwang, Jonghyeok Park, and Wook-Hee Kim. "Converting Concurrent Range Index Structure to Range Index Structure for Disaggregated Memory." Applied Sciences 13, no. 20 (October 10, 2023): 11130. http://dx.doi.org/10.3390/app132011130.

Abstract:
In this work, we propose the Spread approach, which tailors a concurrent range index structure to a range index structure for disaggregated memory connected via RDMA (Remote Direct Memory Access). The Spread approach leverages the concept of tolerating transient inconsistencies in a concurrent range index structure to reduce the amount of expensive RDMA operations. Based on the Spread approach, we converted Blink-tree, a concurrent range index structure, to a range index structure for disaggregated memory called RF-tree. In our experimental study, RF-tree shows comparable performance to Sherman, a state-of-the-art and carefully crafted range index structure for disaggregated memory.
8

Alachiotis, Nikolaos, Panagiotis Skrimponis, Manolis Pissadakis, and Dionisios Pnevmatikatos. "Scalable Phylogeny Reconstruction with Disaggregated Near-memory Processing." ACM Transactions on Reconfigurable Technology and Systems 15, no. 3 (September 30, 2022): 1–32. http://dx.doi.org/10.1145/3484983.

Abstract:
Disaggregated computer architectures eliminate resource fragmentation in next-generation datacenters by enabling virtual machines to employ resources such as CPUs, memory, and accelerators that are physically located on different servers. While this paves the way for highly compute- and/or memory-intensive applications to potentially deploy all CPUs and/or memory resources in a datacenter, it poses a major challenge to the efficient deployment of hardware accelerators: input/output data can reside on different servers than the ones hosting accelerator resources, thereby requiring time- and energy-consuming remote data transfers that diminish the gains of hardware acceleration. Targeting a disaggregated datacenter architecture similar to the IBM dReDBox disaggregated datacenter prototype, the present work explores the potential of deploying custom acceleration units adjacently to the disaggregated-memory controller on memory bricks (in dReDBox terminology), which is implemented on FPGA technology, to reduce data movement and improve performance and energy efficiency when reconstructing large phylogenies (evolutionary relationships among organisms). A fundamental computational kernel is the Phylogenetic Likelihood Function (PLF), which dominates the total execution time (up to 95%) of widely used maximum-likelihood methods. Numerous efforts to boost PLF performance over the years focused on accelerating computation; since the PLF is a data-intensive, memory-bound operation, performance remains limited by data movement, and memory disaggregation only exacerbates the problem. 
We describe two near-memory processing models, one that addresses the problem of workload distribution to memory bricks, which is particularly tailored toward larger genomes (e.g., plants and mammals), and one that reduces overall memory requirements through memory-side data interpolation transparently to the application, thereby allowing the phylogeny size to scale to a larger number of organisms without requiring additional memory.
9

Jeong, Yeonwoo, Gyeonghwan Jung, Kyuli Park, Youngjae Kim, and Sungyong Park. "Empirical Analysis of Disaggregated Cloud Memory on Memory Intensive Applications." JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE 23, no. 5 (October 31, 2023): 273–82. http://dx.doi.org/10.5573/jsts.2023.23.5.273.

10

Gonzalez, Jorge, Mauricio G. Palma, Maarten Hattink, Ruth Rubio-Noriega, Lois Orosa, Onur Mutlu, Keren Bergman, and Rodolfo Azevedo. "Optically connected memory for disaggregated data centers." Journal of Parallel and Distributed Computing 163 (May 2022): 300–312. http://dx.doi.org/10.1016/j.jpdc.2022.01.013.

11

Koh, Kwangwon, Kangho Kim, Seunghyub Jeon, and Jaehyuk Huh. "Disaggregated Cloud Memory with Elastic Block Management." IEEE Transactions on Computers 68, no. 1 (January 1, 2019): 39–52. http://dx.doi.org/10.1109/tc.2018.2851565.

12

Kwon, Youngeun, and Minsoo Rhu. "A Disaggregated Memory System for Deep Learning." IEEE Micro 39, no. 5 (September 1, 2019): 82–90. http://dx.doi.org/10.1109/mm.2019.2929165.

13

Ewais, Mohammad, and Paul Chow. "Disaggregated Memory in the Datacenter: A Survey." IEEE Access 11 (2023): 20688–712. http://dx.doi.org/10.1109/access.2023.3250407.

14

Wang, Qing, Youyou Lu, and Jiwu Shu. "Building Write-Optimized Tree Indexes on Disaggregated Memory." ACM SIGMOD Record 52, no. 1 (June 7, 2023): 45–52. http://dx.doi.org/10.1145/3604437.3604448.

Abstract:
Memory disaggregation architecture physically separates CPU and memory into independent components, which are connected via high-speed RDMA networks, greatly improving resource utilization of database systems. However, such an architecture poses unique challenges to data indexing due to limited RDMA semantics and near-zero computation power at memory side. Existing indexes supporting disaggregated memory either suffer from low write performance, or require hardware modification.
15

Giannoula, Christina, Kailong Huang, Jonathan Tang, Nectarios Koziris, Georgios Goumas, Zeshan Chishti, and Nandita Vijaykumar. "Architectural Support for Efficient Data Movement in Fully Disaggregated Systems." ACM SIGMETRICS Performance Evaluation Review 51, no. 1 (June 26, 2023): 5–6. http://dx.doi.org/10.1145/3606376.3593533.

Abstract:
Traditional data centers include monolithic servers that tightly integrate CPU, memory and disk (Figure 1a). Instead, Disaggregated Systems (DSs) [8, 13, 18, 27] organize multiple compute (CC), memory (MC) and storage devices as independent, failure-isolated components interconnected over a high-bandwidth network (Figure 1b). DSs can greatly reduce data center costs by providing improved resource utilization, resource scaling, failure-handling and elasticity in modern data centers [5, 8-10, 11, 13, 18, 27]. The MCs provide large pools of main memory (remote memory), while the CCs include the on-chip caches and a few GBs of DRAM (local memory) that acts as a cache of remote memory. In this context, a large fraction of the application's data (~ 80%) [8, 18, 27] is located in remote memory, and can cause large performance penalties from remotely accessing data over the network.
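The local-DRAM-as-cache arrangement described in this abstract can be simulated with a tiny LRU model (a hedged sketch; the capacities, trace, and class names are invented for illustration, not taken from the paper):

```python
# Illustrative sketch (assumed parameters, not from the paper): a compute
# component's small local DRAM acting as an LRU cache over a large remote
# memory pool. Misses are served over the network.

from collections import OrderedDict

class LocalCache:
    """LRU cache of remote-memory pages held in the CC's local DRAM."""
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()
        self.remote_accesses = 0

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)        # local hit
        else:
            self.remote_accesses += 1           # fetch over the network
            self.pages[page] = True
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least-recently-used

# Working set of 100 pages; local DRAM holds only 20 (~80% remote).
cache = LocalCache(capacity_pages=20)
trace = [p % 100 for p in range(1000)]          # cyclic scan: worst case for LRU
for page in trace:
    cache.access(page)
print(cache.remote_accesses / len(trace))       # -> 1.0: every access is remote
```

A cyclic scan over a working set five times larger than local DRAM defeats LRU entirely, so every access pays the network penalty the abstract describes; access patterns with reuse fare far better.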
16

Lim, Kevin, Jichuan Chang, Trevor Mudge, Parthasarathy Ranganathan, Steven K. Reinhardt, and Thomas F. Wenisch. "Disaggregated memory for expansion and sharing in blade servers." ACM SIGARCH Computer Architecture News 37, no. 3 (June 15, 2009): 267–78. http://dx.doi.org/10.1145/1555815.1555789.

17

Miller, Ethan, Achilles Benetopoulos, George Neville-Neil, Pankaj Mehra, and Daniel Bittman. "Pointers in Far Memory." Queue 21, no. 3 (June 23, 2023): 75–93. http://dx.doi.org/10.1145/3606029.

Abstract:
Effectively exploiting emerging far-memory technology requires consideration of operating on richly connected data outside the context of the parent process. Operating-system technology in development offers help by exposing abstractions such as memory objects and globally invariant pointers that can be traversed by devices and newly instantiated compute. Such ideas will allow applications running on future heterogeneous distributed systems with disaggregated memory nodes to exploit near-memory processing for higher performance and to independently scale their memory and compute resources for lower cost.
18

Barros, Carlos Pestana, Luis A. Gil-Alana, and James E. Payne. "U.S. Disaggregated renewable energy consumption: Persistence and long memory behavior." Energy Economics 40 (November 2013): 425–32. http://dx.doi.org/10.1016/j.eneco.2013.07.018.

19

Giannoula, Christina, Kailong Huang, Jonathan Tang, Nectarios Koziris, Georgios Goumas, Zeshan Chishti, and Nandita Vijaykumar. "DaeMon: Architectural Support for Efficient Data Movement in Fully Disaggregated Systems." Proceedings of the ACM on Measurement and Analysis of Computing Systems 7, no. 1 (February 27, 2023): 1–36. http://dx.doi.org/10.1145/3579445.

Abstract:
Resource disaggregation offers a cost effective solution to resource scaling, utilization, and failure-handling in data centers by physically separating hardware devices in a server. Servers are architected as pools of processor, memory, and storage devices, organized as independent failure-isolated components interconnected by a high-bandwidth network. A critical challenge, however, is the high performance penalty of accessing data from a remote memory module over the network. Addressing this challenge is difficult as disaggregated systems have high runtime variability in network latencies/bandwidth, and page migration can significantly delay critical path cache line accesses in other pages. This paper conducts a characterization analysis on different data movement strategies in fully disaggregated systems, evaluates their performance overheads in a variety of workloads, and introduces DaeMon, the first software-transparent mechanism to significantly alleviate data movement overheads in fully disaggregated systems. First, to enable scalability to multiple hardware components in the system, we enhance each compute and memory unit with specialized engines that transparently handle data migrations. Second, to achieve high performance and provide robustness across various network, architecture and application characteristics, we implement a synergistic approach of bandwidth partitioning, link compression, decoupled data movement of multiple granularities, and adaptive granularity selection in data movements. We evaluate DaeMon in a wide variety of workloads at different network and architecture configurations using a state-of-the-art simulator. DaeMon improves system performance and data access costs by 2.39× and 3.06×, respectively, over the widely-adopted approach of moving data at page granularity.
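One of the ideas named in the abstract above, adaptive granularity selection, can be sketched as a simple miss-time policy (a hedged illustration under assumed thresholds and sizes; this is not DaeMon's actual mechanism, which combines it with bandwidth partitioning, compression, and decoupled movement):

```python
# Hedged sketch of adaptive granularity selection -- not DaeMon's actual
# policy. Pages with demonstrated spatial locality are migrated whole;
# cold pages are served at cache-line granularity so critical-path
# accesses are not delayed behind full-page transfers.

from collections import Counter

PAGE_SIZE, LINE_SIZE = 4096, 64
HOT_THRESHOLD = 8          # assumed: misses to a page before it counts as hot

class GranularitySelector:
    def __init__(self):
        self.touches = Counter()   # per-page miss counts
        self.bytes_moved = 0       # simulated network traffic

    def on_miss(self, addr):
        page = addr // PAGE_SIZE
        self.touches[page] += 1
        if self.touches[page] >= HOT_THRESHOLD:
            self.bytes_moved += PAGE_SIZE      # migrate the whole page
            return "page"
        self.bytes_moved += LINE_SIZE          # fetch one cache line
        return "line"

sel = GranularitySelector()
decisions = [sel.on_miss(line * 64) for line in range(8)]  # 8 misses, one page
print(decisions.count("line"), decisions.count("page"))    # 7 line fetches, 1 page
```

The point of the sketch is the trade-off the paper characterizes: line fetches keep critical-path latency low, while page migration amortizes network cost only once locality is evident.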
20

Matsui, Chihiro, and Ken Takeuchi. "Dynamic Adjustment of Storage Class Memory Capacity in Memory-Resource Disaggregated Hybrid Storage With SCM and NAND Flash Memory." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 27, no. 8 (August 2019): 1799–810. http://dx.doi.org/10.1109/tvlsi.2019.2905852.

21

Zou, Mingzhe, Shuyang Zhu, Jiacheng Gu, Lidija M. Korunovic, and Sasa Z. Djokic. "Heating and Lighting Load Disaggregation Using Frequency Components and Convolutional Bidirectional Long Short-Term Memory Method." Energies 14, no. 16 (August 8, 2021): 4831. http://dx.doi.org/10.3390/en14164831.

Abstract:
Load disaggregation for the identification of specific load types in the total demands (e.g., demand-manageable loads, such as heating or cooling loads) is becoming increasingly important for the operation of existing and future power supply systems. This paper introduces an approach in which periodical changes in the total demands (e.g., daily, weekly, and seasonal variations) are disaggregated into corresponding frequency components and correlated with the same frequency components in the meteorological variables (e.g., temperature and solar irradiance), allowing to select combinations of frequency components with the strongest correlations as the additional explanatory variables. The paper first presents a novel Fourier series regression method for obtaining target frequency components, which is illustrated on two household-level datasets and one substation-level dataset. These results show that correlations between selected disaggregated frequency components are stronger than the correlations between the original non-disaggregated data. Afterwards, convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) methods are used to represent dependencies among multiple dimensions and to output the estimated disaggregated time series of specific types of loads, where Bayesian optimisation is applied to select hyperparameters of CNN-BiLSTM model. The CNN-BiLSTM and other deep learning models are reported to have excellent performance in many regression problems, but they are often applied as “black box” models without further exploration or analysis of the modelled processes. Therefore, the paper compares CNN-BiLSTM model in which correlated frequency components are used as the additional explanatory variables with a naïve CNN-BiLSTM model (without frequency components). 
The presented case studies, related to the identification of electrical heating load and lighting load from the total demands, show that the accuracy of disaggregation improves after specific frequency components of the total demand are correlated with the corresponding frequency components of temperature and solar irradiance, i.e., that frequency component-based CNN-BiLSTM model provides a more accurate load disaggregation. Obtained results are also compared/benchmarked against the two other commonly used models, confirming the benefits of the presented load disaggregation methodology.
22

Lee, Sekwon, Soujanya Ponnapalli, Sharad Singhal, Marcos K. Aguilera, Kimberly Keeton, and Vijay Chidambaram. "DINOMO." Proceedings of the VLDB Endowment 15, no. 13 (September 2022): 4023–37. http://dx.doi.org/10.14778/3565838.3565854.

Abstract:
We present Dinomo, a novel key-value store for disaggregated persistent memory (DPM). Dinomo is the first key-value store for DPM that simultaneously achieves high common-case performance, scalability, and lightweight online reconfiguration. We observe that previously proposed key-value stores for DPM had architectural limitations that prevent them from achieving all three goals simultaneously. Dinomo uses a novel combination of techniques such as ownership partitioning, disaggregated adaptive caching, selective replication, and lock-free and log-free indexing to achieve these goals. Compared to a state-of-the-art DPM key-value store, Dinomo achieves at least 3.8X better throughput at scale on various workloads and higher scalability, while providing fast reconfiguration.
23

Al Maruf, Hasan, and Mosharaf Chowdhury. "Memory Disaggregation: Advances and Open Challenges." ACM SIGOPS Operating Systems Review 57, no. 1 (June 26, 2023): 29–37. http://dx.doi.org/10.1145/3606557.3606562.

Abstract:
Compute and memory are tightly coupled within each server in traditional datacenters. Large-scale datacenter operators have identified this coupling as a root cause behind fleetwide resource underutilization and increasing Total Cost of Ownership (TCO). With the advent of ultra-fast networks and cache-coherent interfaces, memory disaggregation has emerged as a potential solution, whereby applications can leverage available memory even outside server boundaries. This paper summarizes the growing research landscape of memory disaggregation from a software perspective and introduces the challenges toward making it practical under current and future hardware trends. We also reflect on our seven-year journey in the SymbioticLab to build a comprehensive disaggregated memory system over ultra-fast networks. We conclude with some open challenges toward building next-generation memory disaggregation systems leveraging emerging cache-coherent interconnects.
24

Aguilera, Marcos K., Emmanuel Amaro, Nadav Amit, Erika Hunhoff, Anil Yelam, and Gerd Zellweger. "Memory disaggregation: why now and what are the challenges." ACM SIGOPS Operating Systems Review 57, no. 1 (June 26, 2023): 38–46. http://dx.doi.org/10.1145/3606557.3606563.

Abstract:
Hardware disaggregation has emerged as one of the most fundamental shifts in how we build computer systems over the past decades. While disaggregation has been successful for several types of resources (storage, power, and others), memory disaggregation has yet to happen. We make the case that the time for memory disaggregation has arrived. We look at past successful disaggregation stories and learn that their success depended on two requirements: addressing a burning issue and being technically feasible. We examine memory disaggregation through this lens and find that both requirements are finally met. Once available, memory disaggregation will require software support to be used effectively. We discuss some of the challenges of designing an operating system that can utilize disaggregated memory for itself and its applications.
25

McFall, G. Peggy, Lars Bäckman, and Roger A. Dixon. "Nuances in Alzheimer’s Genetic Risk Reveal Differential Predictions of Non-demented Memory Aging Trajectories: Selective Patterns by APOE Genotype and Sex." Current Alzheimer Research 16, no. 4 (April 24, 2019): 302–15. http://dx.doi.org/10.2174/1567205016666190315094452.

Abstract:
Background: Apolipoprotein E (APOE) is a prominent genetic risk factor for Alzheimer’s disease (AD) and a frequent target for associations with non-demented and cognitively impaired aging. APOE offers a unique opportunity to evaluate two dichotomous comparisons and selected gradations of APOE risk. Some evidence suggests that APOE effects may differ by sex and emerge especially in interaction with other AD-related biomarkers (e.g., vascular health). Methods: Longitudinal trajectories of non-demented adults (n = 632, 67% female, mean age = 68.9) populated a 40-year band of aging. Focusing on memory performance and individualized memory trajectories, a sequence of latent growth models was tested for predictions of (moderation between) APOE and pulse pressure (PP) as stratified by sex. The analyses (1) established robust benchmark PP effects on memory trajectories, (2) compared predictions of alternative dichotomous groupings (ε4- vs ε4+, ε2- vs ε2+), and (3) examined precision-based predictions by disaggregated APOE genotypes. Results: Healthier (lower) PP was associated with better memory performance and less decline. Therefore, all subsequent analyses were conducted in the interactive context of PP effects and sex stratification. The ε4-based dichotomization produced no differential genetic predictions. The ε2-based analyses showed sex differences, including selective protection for ε2-positive females. Exploratory follow-up disaggregated APOE genotype analyses suggested selective ε2 protection effects for both homozygotic and heterozygotic females. Conclusion: Precision analyses of AD genetic risk will advance the understanding of underlying mechanisms and improve personalized implementation of interventions.
26

Peters, Adaranijo, George Oikonomou, and Georgios Zervas. "In Compute/Memory Dynamic Packet/Circuit Switch Placement for Optically Disaggregated Data Centers." Journal of Optical Communications and Networking 10, no. 7 (June 29, 2018): B164. http://dx.doi.org/10.1364/jocn.10.00b164.

27

Kraska, Tim. "Technical Perspective for Sherman: A Write-Optimized Distributed B+Tree Index on Disaggregated Memory." ACM SIGMOD Record 52, no. 1 (June 7, 2023): 44. http://dx.doi.org/10.1145/3604437.3604447.

Abstract:
Separation of compute and storage has become the de facto standard for cloud database systems. First proposed in 2007 for database systems [2], it is now widely adopted by all major cloud providers such as Amazon Redshift, Google BigQuery, and Snowflake. Separation of compute and storage adds enormous value for the customer. Users can scale storage independently of compute, which enables them to only pay for what they really use. Consider a scenario in which data grows linearly over time, but most queries only access the last month of data, which remains relatively stable. Without the separation of compute and storage, the user would gradually be forced to significantly increase the database cluster capacity. In contrast, modern cloud database systems allow scaling the storage separately from compute; the compute cluster stays the same over time, whereas the data is stored on cheap cloud storage services, like Amazon S3.
28

Wu, Chenyuan, Mohammad Javad Amiri, Jared Asch, Heena Nagda, Qizhen Zhang, and Boon Thau Loo. "FlexChain." Proceedings of the VLDB Endowment 16, no. 1 (September 2022): 23–36. http://dx.doi.org/10.14778/3561261.3561264.

Abstract:
While permissioned blockchains enable a family of data center applications, existing systems suffer from imbalanced loads across compute and memory, exacerbating the underutilization of cloud resources. This paper presents FlexChain, a novel permissioned blockchain system that addresses this challenge by physically disaggregating CPUs, DRAM, and storage devices to process different blockchain workloads efficiently. Disaggregation allows blockchain service providers to upgrade and expand hardware resources independently to support a wide range of smart contracts with diverse CPU and memory demands. Moreover, it ensures efficient resource utilization and hence prevents resource fragmentation in a data center. We have explored the design of XOV blockchain systems in a disaggregated fashion and developed a tiered key-value store that can elastically scale its memory and storage. Our design significantly speeds up the execution stage. We have also leveraged several techniques to parallelize the validation stage in FlexChain to further improve the overall blockchain performance. Our evaluation results show that FlexChain can provide independent compute and memory scalability, while incurring at most 12.8% disaggregation overhead. FlexChain achieves almost identical throughput as the state-of-the-art distributed approaches with significantly lower memory and CPU consumption for compute-intensive and memory-intensive workloads respectively.
29

Zervas, Georgios, Hui Yuan, Arsalan Saljoghei, Qianqiao Chen, and Vaibhawa Mishra. "Optically Disaggregated Data Centers With Minimal Remote Memory Latency: Technologies, Architectures, and Resource Allocation [Invited]." Journal of Optical Communications and Networking 10, no. 2 (February 1, 2018): A270. http://dx.doi.org/10.1364/jocn.10.00a270.

30

Alonso, Andrés M., Francisco J. Nogales, and Carlos Ruiz. "A Single Scalable LSTM Model for Short-Term Forecasting of Massive Electricity Time Series." Energies 13, no. 20 (October 13, 2020): 5328. http://dx.doi.org/10.3390/en13205328.

Abstract:
Most electricity systems worldwide are deploying advanced metering infrastructures to collect relevant operational data. In particular, smart meters allow tracking electricity load consumption at a very disaggregated level and at high frequency rates. This data opens the possibility of developing new forecasting models with a potential positive impact on electricity systems. We present a general methodology that can process and forecast many smart-meter time series. Instead of using traditional and univariate approaches, we develop a single but complex recurrent neural-network model with long short-term memory that can capture individual consumption patterns and consumptions from different households. The resulting model can accurately predict future loads (short-term) of individual consumers, even if these were not included in the original training set. This entails a great potential for large-scale applications as once the single network is trained, accurate individual forecast for new consumers can be obtained at almost no computational cost. The proposed model is tested under a large set of numerical experiments by using a real-world dataset with thousands of disaggregated electricity consumption time series. Furthermore, we explore how geo-demographic segmentation of consumers may impact the forecasting accuracy of the model.
31

Apergis, Nicholas, and Chris Tsoumas. "Long memory and disaggregated energy consumption: Evidence from fossils, coal and electricity retail in the U.S." Energy Economics 34, no. 4 (July 2012): 1082–87. http://dx.doi.org/10.1016/j.eneco.2011.09.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Matindife, L., Y. Sun, and Z. Wang. "Few-Shot Learning for Image-Based Nonintrusive Appliance Signal Recognition." Computational Intelligence and Neuroscience 2022 (August 23, 2022): 1–14. http://dx.doi.org/10.1155/2022/2142935.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this article, we present the recognition of nonintrusive disaggregated appliance signals through a reduced dataset computer vision deep learning approach. Deep learning data requirements are costly in terms of acquisition time, storage memory requirements, computation time, and dynamic memory usage. We develop our recognition strategy on Siamese and prototypical reduced data few-shot classification algorithms. Siamese networks address the 1-shot recognition well. Appliance activation periods vary considerably, and this can result in imbalance in the number of appliance-specific generated signal images. Prototypical networks address the problem of data imbalance in training. By first carrying out a similarity test on the entire dataset, we establish the quality of our data before input into the deep learning algorithms. The results give acceptable performance and show the promise of few-shot learning in recognizing appliances in the nonintrusive load-monitoring scheme for very limited data samples.
33

Lean, Hooi Hooi, and Russell Smyth. "Long memory in US disaggregated petroleum consumption: Evidence from univariate and multivariate LM tests for fractional integration." Energy Policy 37, no. 8 (August 2009): 3205–11. http://dx.doi.org/10.1016/j.enpol.2009.04.017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ziegler, Tobias, Jacob Nelson-Slivon, Viktor Leis, and Carsten Binnig. "Design Guidelines for Correct, Efficient, and Scalable Synchronization using One-Sided RDMA." Proceedings of the ACM on Management of Data 1, no. 2 (June 13, 2023): 1–26. http://dx.doi.org/10.1145/3589276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Remote data structures built with one-sided Remote Direct Memory Access (RDMA) are at the heart of many disaggregated database management systems today. Concurrent access to these data structures by thousands of remote workers necessitates a highly efficient synchronization scheme. Remarkably, our investigation reveals that existing synchronization schemes display substantial variations in performance and scalability. Even worse, some schemes do not correctly synchronize, resulting in rare and hard-to-detect data corruption. Motivated by these observations, we conduct the first comprehensive analysis of one-sided synchronization techniques and provide general principles for correct synchronization using one-sided RDMA. Our research demonstrates that adherence to these principles not only guarantees correctness but also results in substantial performance enhancements.
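One family of schemes in the design space the paper analyzes is optimistic, version-based synchronization, where one-sided readers detect torn reads without taking a lock. The sketch below is a local, in-process stand-in (the `RemoteRegion` class and its fields are invented for illustration; real one-sided RDMA would use verbs such as RDMA READ/WRITE and atomics against registered memory): a writer bumps a version word to odd before mutating and back to even after, and a reader retries until it sees the same even version on both sides of its copy.

```python
class RemoteRegion:
    """Hypothetical stand-in for a remote memory region:
    a version word followed by the payload."""
    def __init__(self, payload):
        self.version = 0          # even = stable, odd = write in flight
        self.payload = list(payload)

def write(region, new_payload):
    region.version += 1           # mark unstable (odd)
    region.payload = list(new_payload)
    region.version += 1           # mark stable again (even)

def read(region):
    """Optimistic one-sided read: retry until a consistent snapshot
    (same even version observed before and after the copy)."""
    while True:
        v1 = region.version
        snapshot = list(region.payload)
        v2 = region.version
        if v1 == v2 and v1 % 2 == 0:
            return snapshot

region = RemoteRegion([1, 2, 3])
write(region, [4, 5, 6])
```

The correctness subtlety the paper highlights is exactly what this toy model glosses over: on real hardware, the ordering of the version reads relative to the payload read is not guaranteed by default, which is how subtle data corruption arises.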
35

Zhang, Yingqiang, Chaoyi Ruan, Cheng Li, Xinjun Yang, Wei Cao, Feifei Li, Bo Wang, et al. "Towards cost-effective and elastic cloud database deployment via memory disaggregation." Proceedings of the VLDB Endowment 14, no. 10 (June 2021): 1900–1912. http://dx.doi.org/10.14778/3467861.3467877.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
It is challenging for cloud-native relational databases to meet the ever-increasing needs of scaling compute and memory resources independently and elastically. The recent emergence of memory disaggregation architecture, relying on high-speed RDMA network, offers opportunities to build cost-effective and elastic cloud-native databases. There exist proposals to let unmodified applications run transparently on disaggregated systems. However, running relational database kernel atop such proposals experiences notable performance degradation and time-consuming failure recovery, offsetting the benefits of disaggregation. To address these challenges, in this paper, we propose a novel database architecture called LegoBase, which explores the co-design of database kernel and memory disaggregation. It pushes the memory management back to the database layer for bypassing the Linux I/O stack and re-using or designing (remote) memory access optimizations with an understanding of data access patterns. LegoBase further splits the conventional ARIES fault tolerance protocol to independently handle the local and remote memory failures for fast recovery of compute instances. We implemented LegoBase atop MySQL. We compare LegoBase against MySQL running on a standalone machine and the state-of-the-art disaggregation proposal Infiniswap. Our evaluation shows that even with a large fraction of data placed on the remote memory, LegoBase's system performance in terms of throughput (up to 9.41% drop) and P99 latency (up to 11.58% increase) is comparable to the monolithic MySQL setup, and significantly outperforms (1.99x-2.33x, respectively) the deployment of MySQL over Infiniswap. Meanwhile, LegoBase introduces an up to 3.87x and 5.48x speedup of the recovery and warm-up time, respectively, over the monolithic MySQL and MySQL over Infiniswap, when handling failures or planned re-configurations.
36

Hertel, Katarzyna, and Agnieszka Leszczyńska. "Uporczywość inflacji i jej komponentów – badanie empiryczne dla Polski." Przegląd Statystyczny. Statistical Review 2013, no. 2 (June 30, 2013): 187–210. http://dx.doi.org/10.59139/ps.2013.02.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The paper aims to evaluate inflation persistence at a disaggregated level. The measures of inflation persistence used in the exercise rely solely on time series methods of AR and long memory models (cf: Marques, 2004; Pivetta, Reis, 2007; Baillie, 1996) and are applied to Polish CPI and its 11 components. The choice between those two frameworks has been based on the results of the FDF test (Dolado, Gonzalo, Mayoral, 2006). The second part of the study consisted in investigation of the dynamics of persistence. An experiment of rolling window regressions revealed a decrease in the persistence of most of the price indices. A plausible source of the decline was a structural change, which occurred during the introduction of direct inflation targeting by the monetary authorities in Poland.
37

Stavrakakis, Dimitrios, Dimitra Giantsidi, Maurice Bailleu, Philip Sändig, Shady Issa, and Pramod Bhatotia. "Anchor: A Library for Building Secure Persistent Memory Systems." Proceedings of the ACM on Management of Data 1, no. 4 (December 8, 2023): 1–31. http://dx.doi.org/10.1145/3626718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cloud infrastructure is experiencing a shift towards disaggregated setups, especially with the introduction of the Compute Express Link (CXL) technology, where byte-addressable persistent memory (PM) is becoming prominent. To fully utilize the potential of such devices, it is a necessity to access them through network stacks with equivalently high levels of performance (e.g., kernel-bypass, RDMA). While these advancements are enabling the development of high-performance data management systems, their deployment on untrusted cloud environments also increases the security threats. To this end, we present Anchor, a library for building secure PM systems. Anchor provides strong hardware-assisted security properties, while ensuring crash consistency. Anchor exposes APIs for secure data management within the realms of the established PM programming model, targeting byte-addressable storage devices. Anchor leverages trusted execution environments (TEE) and extends their security properties on PM. While the TEE's protected memory region provides a strong foundation for building secure systems, the key challenge is that TEEs are fundamentally incompatible with PM and kernel-bypass networking approaches: in particular, TEEs are neither designed to protect untrusted non-volatile PM, nor can the protected region be accessed via an untrusted DMA connection. To overcome this challenge, we design a PM engine that ensures strong security properties for the PM data, using confidential and authenticated PM data structures, while preserving crash consistency through a secure logging protocol. We further extend the PM engine to provide remote PM data operations via a secure network stack and a formally verified remote attestation protocol to form an end-to-end system. Our evaluation shows that Anchor incurs reasonable overheads, while providing strong security properties.
38

Alonso, Gustavo. "Technical perspective." ACM SIGMOD Record 51, no. 1 (May 31, 2022): 14. http://dx.doi.org/10.1145/3542700.3542704.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Optimizing data movement has always been one of the key ways to get a data processing system to perform efficiently. Appearing under different disguises as computers evolved over the years, the issue is today as relevant as ever. With the advent of the cloud, data movement has become the bottleneck to address in any data processing system. In the cloud, compute and storage are typically disaggregated, with a network in between. In addition, cloud systems are scale-out, i.e., performance is obtained by parallelizing across machines, which also involves network communication. And while it is possible to use machines with large amounts of memory, the pricing models and the virtualized nature of the cloud tends to favor clusters of smaller computing nodes. Nowadays, the problem of optimizing data movement has become the problem of using the network as efficiently as possible.
39

Bairner, Alan. "For a Sociology of Sport." Sociology of Sport Journal 29, no. 1 (March 2012): 102–17. http://dx.doi.org/10.1123/ssj.29.1.102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This essay focuses on some of the main challenges that currently face the sociology of sport, the challenge from the natural sciences, the challenge from mainstream sociology and the challenge which we have set ourselves and which, requires new intellectual innovations of the type discussed in the final sections of this essay. It is vital that the sociology of sport be defended against the tyranny of the natural sciences. This project, however, must not be disaggregated from the requirements to fight for greater acceptance from mainstream sociology and to address our own shortcomings by extending the sociology of sport in potentially exciting ways. In this respect, both memory and space present interesting possibilities. They are highlighted in this essay, from among numerous possible alternatives, for largely personal reasons. The general point, however, is that if we are to defend the sociology of sport successfully, we need to be more creative, both methodologically and theoretically.
40

Amantegui, Jorge, Hugo Morais, and Lucas Pereira. "Benchmark of Electricity Consumption Forecasting Methodologies Applied to Industrial Kitchens." Buildings 12, no. 12 (December 15, 2022): 2231. http://dx.doi.org/10.3390/buildings12122231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Even though Industrial Kitchens (IKs) are among the highest energy intensity spaces, very little work has been done to forecast their consumption. This work explores the possibility of increasing the accuracy of the consumption forecast in an IK by forecasting disaggregated appliance consumption and comparing these results with the forecast of the total consumption of these appliances (Virtual Aggregate—VA). To do so, three different methods are used: the statistical method (Prophet), classic Machine Learning (ML) method such as random forest (RF), and deep learning (DL) method, namely long short-term memory (LSTM). This work uses individual appliance electricity consumption data collected from a Portuguese restaurant over a period of four consecutive weeks. The obtained results suggest that Prophet and RF are the more viable options. The former achieved the best performance in aggregated data, whereas the latter showed better forecasting results for most of the individual loads. Regarding the performance of the VA against the sum of individual appliance forecasts, all models perform better in the former. However, the very small difference across the results shows that this is a viable alternative to forecast aggregated consumption when only individual appliance consumption data are available.
41

Gupta, Phalguni, Kelly B. Collins, Deena Ratner, Simon Watkins, Gregory J. Naus, Daniel V. Landers, and Bruce K. Patterson. "Memory CD4+ T Cells Are the Earliest Detectable Human Immunodeficiency Virus Type 1 (HIV-1)-Infected Cells in the Female Genital Mucosal Tissue during HIV-1 Transmission in an Organ Culture System." Journal of Virology 76, no. 19 (October 1, 2002): 9868–76. http://dx.doi.org/10.1128/jvi.76.19.9868-9876.2002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
ABSTRACT The virologic and cellular factors that are involved in transmission of human immunodeficiency virus type 1 (HIV-1) across the female genital tissue are poorly understood. We have recently developed a human cervical tissue-derived organ culture model to study heterosexual transmission of HIV-1 that mimics the in vivo situation. Using this model we investigated the role of phenotypic characteristics of HIV-1 and identified the cell types that are first infected during transmission. Our data indicate that the cell-free R5 HIV-1 was more efficiently transmitted than cell-free X4 HIV-1. Cell-free and cell-associated HIV-1 had comparable transmission efficiency regardless of whether the virus was of R5 or X4 type. We have demonstrated that memory CD4+ T cells and not Langerhans cells were the first HIV-1 RNA-positive cells detected at the epithelial-submucosal junction 6 h after virus exposure. Multicolor laser confocal microscopy demonstrated a globular distribution of HIV-1 gag-pol mRNA in the cytoplasm, and the distribution of CD4 and the CD45RO isoform was irregular on the cellular membrane. At 96 h postinoculation, in addition to memory CD4+ T cells, HIV-1 RNA-positive Langerhans cells and macrophages were also detected. The identification of CD4+ T cells in the tissue at 6 h was confirmed by flow cytometric simultaneous immunophenotyping and ultrasensitive fluorescence in situ hybridization assay on immune cells isolated from disaggregated tissue. Furthermore, PMPA {9-[2-(phosphonomethoxy)propyl] adenine}, an antiretroviral compound, and UC781, a microbicide, inhibited HIV-1 transmission across the mucosa, indicating the utility of the organ culture to screen topical microbicides for their ability to block sexual transmission of HIV-1.
42

Lee, Kim, Yeo, Seo, Kim, Lee, Hwang, and Park. "Anti-Amyloidogenic Effects of Asarone Derivatives From Perilla frutescens Leaves against Beta-Amyloid Aggregation and Nitric Oxide Production." Molecules 24, no. 23 (November 25, 2019): 4297. http://dx.doi.org/10.3390/molecules24234297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Alzheimer’s disease (AD) is a progressive, neurodegenerative brain disorder associated with loss of memory and cognitive function. Beta-amyloid (Aβ) aggregates, in particular, are known to be highly neurotoxic and lead to neurodegeneration. Therefore, blockade or reduction of Aβ aggregation is a promising therapeutic approach in AD. We have previously reported an inhibitory effect of the methanol extract of Perilla frutescens (L.) Britton (Lamiaceae) and its hexane fraction on Aβ aggregation. Here, the hexane fraction of P. frutescens was subjected to diverse column chromatography based on activity-guided isolation methodology. This approach identified five asarone derivatives including 2,3-dimethoxy-5-(1E)-1-propen-1-yl-phenol (1), β-asarone (2), 3-(2,4,5-trimethoxyphenyl)-(2E)-2-propen-1-ol (3), asaronealdehyde (4), and α-asarone (5). All five asarone derivatives efficiently reduced the aggregation of Aβ and disaggregated preformed Aβ aggregates in a dose-dependent manner as determined by a Thioflavin T (ThT) fluorescence assay. Furthermore, asarone derivatives protected PC12 cells from Aβ aggregate-induced toxicity by reducing the aggregation of Aβ, and significantly reduced NO production from LPS-stimulated BV2 microglial cells. Taken together, these results suggest that asarone derivatives derived from P. frutescens are neuroprotective and have the prophylactic and therapeutic potential in AD.
43

Cabrera, Daniel, Claudio Cubillos, Enrique Urra, and Rafael Mellado. "Framework for Incorporating Artificial Somatic Markers in the Decision-Making of Autonomous Agents." Applied Sciences 10, no. 20 (October 21, 2020): 7361. http://dx.doi.org/10.3390/app10207361.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The somatic marker hypothesis proposes that when a person faces a decision scenario, many thoughts arise and different “physical consequences” are fleetingly observable. It is generally accepted that affective dimension influences cognitive capacities. Several proposals for including affectivity within artificial systems have been presented. However, to the best of our knowledge, a proposal that considers the incorporation of artificial somatic markers in a disaggregated and specialized way for the different phases that make up a decision-making process has not been observed yet. Thus, this research work proposes a framework that considers the incorporation of artificial somatic markers in different phases of the decision-making of autonomous agents: recognition of decision point; determination of the courses of action; analysis of decision options; decision selection and performing; memory management. Additionally, a unified decision-making process and a general architecture for autonomous agents are presented. This proposal offers a qualitative perspective following an approach of grounded theory, which is suggested when existing theories or models cannot fully explain or understand a phenomenon or circumstance under study. This research work represents a novel contribution to the body of knowledge in guiding the incorporation of this biological concept in artificial terms within autonomous agents.
44

Yan, Haiming, Jinyan Zhan, Bing Liu, Wei Huang, and Zhihui Li. "Spatially Explicit Assessment of Ecosystem Resilience: An Approach to Adapt to Climate Changes." Advances in Meteorology 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/798428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The ecosystem resilience plays a key role in maintaining a steady flow of ecosystem services and enables quick and flexible responses to climate changes, and maintaining or restoring the ecosystem resilience of forests is a necessary societal adaptation to climate change; however, there is a great lack of spatially explicit ecosystem resilience assessments. Drawing on principles of the ecosystem resilience highlighted in the literature, we built on the theory of dissipative structures to develop a conceptual model of the ecosystem resilience of forests. A hierarchical indicator system was designed with the influencing factors of the forest ecosystem resilience, including the stand conditions and the ecological memory, which were further disaggregated into specific indicators. Furthermore, indicator weights were determined with the analytic hierarchy process (AHP) and the coefficient of variation method. Based on the remote sensing data and forest inventory data and so forth, the resilience index of forests was calculated. The result suggests that there is significant spatial heterogeneity of the ecosystem resilience of forests, indicating it is feasible to generate large-scale ecosystem resilience maps with this assessment model, and the results can provide a scientific basis for the conservation of forests, which is of great significance to the climate change mitigation.
45

Zhong, Selena, Kristen Wroblewski, Edward Laumann, Martha McClintock, and Jayant Pinto. "ASSESSING HOW AGE, GENDER, RACE, AND EDUCATION AFFECT THE RELATIONSHIPS BETWEEN COGNITIVE DOMAINS AND OLFACTION." Innovation in Aging 7, Supplement_1 (December 1, 2023): 73. http://dx.doi.org/10.1093/geroni/igad104.0235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract The associations between cognitive domains and odor identification are well-established, but how sociodemographic variables affect these relationships is less clear; using the survey-adapted Montreal Cognitive Assessment instrument (MoCA-SA), we assess how age, gender, race, and education shape these relationships. First, we used two different methods, cluster analysis and multidimensional scaling, to empirically derive distinct cognitive domains from the MoCA-SA since it is unclear whether the MoCA-SA can be disaggregated into cognitive domains. We then used ordinal logistic regression to test whether these empirically derived cognitive domains were associated with odor identification and how sociodemographic variables modified these relationships. We identified five out of the six theoretical cognitive domains, with the language domain unable to be identified. We found that odor identification was associated with episodic memory, visuospatial ability, and executive function. Stratified analyses by sociodemographic variables reveal that the associations between some of the cognitive domains and odor identification varied by age, gender, or race, but not by education. These results suggest that 1) the MoCA-SA can be used to identify cognitive domains in survey research and 2) the performance of smell tests as a screener for cognitive decline may potentially be weaker in certain subpopulations.
46

Liu, Shu-Ying, Shuai Lu, Xiao-Lin Yu, Shi-Gao Yang, Wen Liu, Xiang-Meng Liu, Shao-Wei Wang, et al. "Fruitless Wolfberry-Sprout Extract Rescued Cognitive Deficits and Attenuated Neuropathology in Alzheimer’s Disease Transgenic Mice." Current Alzheimer Research 15, no. 9 (July 11, 2018): 856–68. http://dx.doi.org/10.2174/1567205015666180404160625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: Alzheimer’s disease (AD) is a neurodegenerative disease featured by memory loss, neuroinflammation and oxidative stress. Overproduction or insufficient clearance of Aβ leads to its pathological aggregation and deposition, which is considered the predominant neuropathological hallmark of AD. Therefore, reducing Aβ levels and inhibiting Aβ-induced neurotoxicity are feasible therapeutic strategies for AD treatment. Wolfberry has been traditionally used as a natural antioxidant and anti-aging product. However, whether wolfberry species has therapeutic potential on AD remains unknown. Method: The effects of fruitless wolfberry-sprout extract (FWE) on Aβ fibrillation and fibril disaggregation was measured by thioflavin T fluorescence and transmission electron microscope imaging; Aβ oligomer level was determined by dot-blot; Cell viability and apoptosis was assessed by MTT and TUNEL assay. The levels of Aβ40/42, oxidative stress biomarkers and inflammatory cytokines were detected by corresponding kits. 8-month-old male APP/PS1 mice and their age-matched WT littermates were treated with FWE or vehicle by oral administration (gavage) once a day for 4 weeks. Then the cognitive performance was determined using object recognition test and Y-maze test. The Aβ burden and gliosis was evaluated by immunostaining and immunoblotting, respectively. Results: FWE significantly inhibited Aβ fibrillation and disaggregated the formed Aβ fibrils, lowered Aβ oligomer level and Aβ-induced neuro-cytotoxicity, and attenuated oxidative stress in vitro. Oral administration of FWE remarkably improved cognitive function, reduced Aβ burden, decreased gliosis and inflammatory cytokines release, and ameliorated oxidative stress in the brains of APP/PS1 mice. Conclusion: These findings indicate that FWE is a promising natural agent for AD treatment.
47

Scoggins, Chris, Ciera Scott, and Lee Hyer. "The Millon Behavioral Medicine Diagnostic: Profiles of Dementia and Depression." Journal of Student Research 1, no. 1 (March 25, 2012): 60–69. http://dx.doi.org/10.47611/jsr.v1i1.69.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Dementia (or cognitive decline) either results in or causes changes in personality and treatment patterns as the person declines. From a sample of older adults with memory complaints who have varying problems of dementia, depression or both, we address two issues: (1) we provide a personality, stress moderator and treatment prognostic profile of older adults with and without dementia; and (2) we consider the question of the added influence of depression related to these variables. For question 1, older subjects (N=112) were disaggregated by dementia and non-dementia status; for question 2, the older adults (N=62) were further separated into those with a dementia, those who are depressed, and those with both dementia and depression. Patients were interviewed and self-report scales were given to all subjects. All patients had a caregiver. Cognitive and personality styles, treatment and stress markers, and Axis I variables, as well as background and adjustment, were measured. For dementia/non-dementia groups, results show that the dementia group was more detached, had more problems with depression and cognitive dysfunction, and showed less concerns about Informational Fragility. Of the three groups, the combined and dementia groups had the most problems, including more fixed personality features, more psychiatric problems, more stress moderators and more problematic treatment prognostics. We also show profiles of treatment prognostics and stress moderators of each personality type for a dementia, depression and dementia/depression. We highlight the importance of depression at later life whether with or without a dementia.
48

ÇAVDAR, İsmail, and Vahid FARYAD. "New Design of a Supervised Energy Disaggregation Model Based on the Deep Neural Network for a Smart Grid." Energies 12, no. 7 (March 29, 2019): 1217. http://dx.doi.org/10.3390/en12071217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Energy management technology of demand-side is a key process of the smart grid that helps achieve a more efficient use of generation assets by reducing the energy demand of users during peak loads. In the context of a smart grid and smart metering, this paper proposes a hybrid model of energy disaggregation through deep feature learning for non-intrusive load monitoring to classify home appliances based on the information of main meters. In addition, a deep neural model of supervised energy disaggregation with a high accuracy for giving awareness to end users and generating detailed feedback from demand-side with no need for expensive smart outlet sensors was introduced. A new functional API model of deep learning (DL) based on energy disaggregation was designed by combining a one-dimensional convolutional neural network and recurrent neural network (1D CNN-RNN). The proposed model was trained on Google Colab’s Tesla graphics processing unit (GPU) using Keras. The residential energy disaggregation dataset was used for real households and was implemented in Tensorflow backend. Three different disaggregation methods were compared, namely the convolutional neural network, 1D CNN-RNN, and long short-term memory. The results showed that energy can be disaggregated from the metrics very accurately using the proposed 1D CNN-RNN model. Finally, as a work in progress, we introduced the DL on the Edge for Fog Computing non-intrusive load monitoring (NILM) on a low-cost embedded board using a state-of-the-art inference library called uTensor that can support any Mbed enabled board with no need for the DL API of web services and internet connectivity.
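To make the disaggregation task concrete, the sketch below shows a deliberately naive, non-learning baseline (function name, appliance signatures, and readings are all invented): given only a main-meter reading, greedily explain it as a combination of known per-appliance power draws. The paper's 1D CNN-RNN replaces exactly this kind of hand-crafted decomposition with learned deep features.

```python
def appliance_states(main_readings, signatures):
    """Toy NILM baseline: at each time step, mark an appliance 'on'
    if subtracting its typical draw (largest first) from the
    residual main-meter reading keeps the residual non-negative."""
    states = []
    for total in main_readings:
        residual, on = total, []
        for name, draw in sorted(signatures.items(), key=lambda kv: -kv[1]):
            if residual >= draw:
                residual -= draw
                on.append(name)
        states.append(sorted(on))
    return states

# Hypothetical per-appliance draws in kW and a short main-meter trace.
signatures = {"kettle": 2.0, "fridge": 0.1}
states = appliance_states([2.1, 0.1, 0.0], signatures)
```

This greedy scheme fails as soon as appliances have overlapping or variable draws, which is precisely why supervised deep models trained on reference disaggregation datasets are used instead.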
49

Bhavnani, Supriya, Georgia Lockwood Estrin, Rianne Haartsen, Sarah K. G. Jensen, Teodora Gliga, Vikram Patel, and Mark H. Johnson. "EEG signatures of cognitive and social development of preschool children–a systematic review." PLOS ONE 16, no. 2 (February 19, 2021): e0247223. http://dx.doi.org/10.1371/journal.pone.0247223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background Early identification of preschool children who are at risk of faltering in their development is essential to ensuring that all children attain their full potential. Electroencephalography (EEG) has been used to measure neural correlates of cognitive and social development in children for decades. Effective portable and low-cost EEG devices increase the potential of its use to assess neurodevelopment in children at scale and particularly in low-resource settings. We conducted a systematic review aiming to synthesise EEG measures of cognitive and social development in 2-5-year-old children. Our secondary aim was to identify how these measures differ across a) the course of development within this age range, b) gender and c) socioeconomic status (SES). Methods and findings A systematic literature search identified 51 studies for inclusion in this review. Data relevant to the primary and secondary aims was extracted from these studies and an assessment for risk of bias was done, which highlighted the need for harmonisation of EEG data collection and analysis methods across research groups and more detailed reporting of participant characteristics. Studies reported on the domains of executive function (n = 22 papers), selective auditory attention (n = 9), learning and memory (n = 5), processing of faces (n = 7) and emotional stimuli (n = 8). For papers investigating executive function and selective auditory attention, the most commonly reported measures were alpha power and the amplitude and latency of positive (P1, P2, P3) and negative (N1, N2) deflections of event-related potential (ERP) components. The N170 and P1 ERP components were the most commonly reported neural responses to face and emotional faces stimuli. A mid-latency negative component and positive slow wave were used to index learning and memory, and late positive potential in response to emotional non-face stimuli.
While almost half the studies described changes in EEG measures across age, only eight studies disaggregated results based on gender, and six included children from low income households to assess the impact of SES on neurodevelopment. No studies were conducted in low- and middle-income countries. Conclusion This review has identified power across the EEG spectrum and ERP components to be the measures most commonly reported in studies in which preschool children engage in tasks indexing cognitive and social development. It has also highlighted the need for additional research into their changes across age and based on gender and SES.
50

Aalipour, Mehdi, Bohumil Šťastný, Filip Horký, and Bahman Jabbarian Amiri. "Scaling an Artificial Neural Network-Based Water Quality Index Model from Small to Large Catchments." Water 14, no. 6 (March 15, 2022): 920. http://dx.doi.org/10.3390/w14060920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Scaling models is one of the challenges for water resource planning and management, with the aim of bringing the developed models into practice by applying them to predict water quality and quantity for catchments that lack sufficient data. For this study, we evaluated artificial neural network (ANN) training algorithms to predict the water quality index in a source catchment. Then, multiple linear regression (MLR) models were developed, using the predicted water quality index of the ANN training algorithms and water quality variables, as dependent and independent variables, respectively. The most appropriate MLR model has been selected on the basis of the Akaike information criterion, sensitivity and uncertainty analyses. The performance of the MLR model was then evaluated by a variable aggregation and disaggregation approach, for upscaling and downscaling purposes, using the data from four very large- and three large-sized catchments and from eight medium-, three small- and seven very small-sized catchments, all located in the southern basin of the Caspian Sea. The performance of seven artificial neural network training algorithms, including Quick Propagation, Conjugate Gradient Descent, Quasi-Newton, Limited Memory Quasi-Newton, Levenberg–Marquardt, Online Back Propagation, and Batch Back Propagation, has been evaluated to predict the water quality index. The results show that the highest mean absolute error was observed in the WQI, as predicted by the ANN LM training algorithm; the lowest error values were for the ANN LMQN and CGD training algorithms.
Our findings also indicate that for upscaling, the aggregated MLR model could provide reliable performance to predict the water quality index, since the r2 coefficient of the models varies from 0.73 ± 0.2 for large catchments, to 0.85 ± 0.15 for very large catchments, and for downscaling, the r2 coefficient of the disaggregated MLR model ranges from 0.93 ± 0.05 for very large catchments, to 0.97 ± 0.02 for medium catchments. Therefore, scaled models could be applied to catchments that lack sufficient data to perform a rapid assessment of the water quality index in the study area.
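The transfer step described above, fitting a regression in data-rich catchments and applying the coefficients where monitoring data are lacking, can be sketched minimally. This is an illustrative single-predictor ordinary-least-squares fit (the function name and all numbers are invented; the paper's MLR models use multiple predictors and AHP-style weighting):

```python
def fit_ols(xs, ys):
    """Closed-form ordinary least squares for a single predictor:
    returns (intercept, slope)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y - slope * mean_x, slope

# Fit on (hypothetical) source-catchment data: a water quality
# variable vs. the ANN-predicted water quality index.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.1, 8.0]
a, b = fit_ols(xs, ys)

# Apply the fitted coefficients to an unmonitored catchment.
prediction = a + b * 5.0
```

The aggregation/disaggregation evaluation in the paper then asks how well such coefficients, fitted at one catchment scale, predict the index at a larger or smaller scale.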
