Journal articles on the topic "Memory disaggregation"

Consult the top 50 journal articles for your research on the topic "Memory disaggregation".


1

Aguilera, Marcos K., Emmanuel Amaro, Nadav Amit, Erika Hunhoff, Anil Yelam and Gerd Zellweger. "Memory disaggregation: why now and what are the challenges". ACM SIGOPS Operating Systems Review 57, no. 1 (June 26, 2023): 38–46. http://dx.doi.org/10.1145/3606557.3606563.

Abstract
Hardware disaggregation has emerged as one of the most fundamental shifts in how we build computer systems over the past decades. While disaggregation has been successful for several types of resources (storage, power, and others), memory disaggregation has yet to happen. We make the case that the time for memory disaggregation has arrived. We look at past successful disaggregation stories and learn that their success depended on two requirements: addressing a burning issue and being technically feasible. We examine memory disaggregation through this lens and find that both requirements are finally met. Once available, memory disaggregation will require software support to be used effectively. We discuss some of the challenges of designing an operating system that can utilize disaggregated memory for itself and its applications.
2

Mehra, Pankaj and Tom Coughlin. "Taming Memory With Disaggregation". Computer 55, no. 9 (September 2022): 94–98. http://dx.doi.org/10.1109/mc.2022.3187847.

3

Wu, Chenyuan, Mohammad Javad Amiri, Jared Asch, Heena Nagda, Qizhen Zhang and Boon Thau Loo. "FlexChain". Proceedings of the VLDB Endowment 16, no. 1 (September 2022): 23–36. http://dx.doi.org/10.14778/3561261.3561264.

Abstract
While permissioned blockchains enable a family of data center applications, existing systems suffer from imbalanced loads across compute and memory, exacerbating the underutilization of cloud resources. This paper presents FlexChain, a novel permissioned blockchain system that addresses this challenge by physically disaggregating CPUs, DRAM, and storage devices to process different blockchain workloads efficiently. Disaggregation allows blockchain service providers to upgrade and expand hardware resources independently to support a wide range of smart contracts with diverse CPU and memory demands. Moreover, it ensures efficient resource utilization and hence prevents resource fragmentation in a data center. We have explored the design of XOV blockchain systems in a disaggregated fashion and developed a tiered key-value store that can elastically scale its memory and storage. Our design significantly speeds up the execution stage. We have also leveraged several techniques to parallelize the validation stage in FlexChain to further improve the overall blockchain performance. Our evaluation results show that FlexChain can provide independent compute and memory scalability, while incurring at most 12.8% disaggregation overhead. FlexChain achieves almost identical throughput as the state-of-the-art distributed approaches with significantly lower memory and CPU consumption for compute-intensive and memory-intensive workloads respectively.
4

Al Maruf, Hasan and Mosharaf Chowdhury. "Memory Disaggregation: Advances and Open Challenges". ACM SIGOPS Operating Systems Review 57, no. 1 (June 26, 2023): 29–37. http://dx.doi.org/10.1145/3606557.3606562.

Abstract
Compute and memory are tightly coupled within each server in traditional datacenters. Large-scale datacenter operators have identified this coupling as a root cause behind fleetwide resource underutilization and increasing Total Cost of Ownership (TCO). With the advent of ultra-fast networks and cache-coherent interfaces, memory disaggregation has emerged as a potential solution, whereby applications can leverage available memory even outside server boundaries. This paper summarizes the growing research landscape of memory disaggregation from a software perspective and introduces the challenges toward making it practical under current and future hardware trends. We also reflect on our seven-year journey in the SymbioticLab to build a comprehensive disaggregated memory system over ultra-fast networks. We conclude with some open challenges toward building next-generation memory disaggregation systems leveraging emerging cache-coherent interconnects.
5

Nam, Jaeyoun, Hokeun Cha, ByeongKeon Lee and Beomseok Nam. "Xpass: NUMA-aware Persistent Memory Disaggregation". Journal of KIISE 48, no. 7 (July 31, 2021): 735–41. http://dx.doi.org/10.5626/jok.2021.48.7.735.

6

Celov, Dmitrij and Remigijus Leipus. "Time series aggregation, disaggregation and long memory". Lietuvos matematikos rinkinys 46 (September 21, 2023): 255–62. http://dx.doi.org/10.15388/lmr.2006.30723.

Abstract
Large-scale aggregation and its inverse, disaggregation, are important problems in many fields of study, such as macroeconomics, astronomy, hydrology and sociology. It was shown in Granger (1980) that a certain aggregation of random coefficient AR(1) models can lead to long memory output. Dacunha-Castelle and Oppenheim (2001) explored the topic further, answering when and if a predefined long memory process could be obtained as the result of aggregation of a specific class of individual processes. In this paper, the disaggregation scheme of Leipus et al. (2006) is briefly discussed. Then disaggregation into AR(1) is analyzed further, resulting in a theorem that helps, under corresponding assumptions, to construct a mixture density for a given process aggregated by the AR(1) scheme. Finally, the theorem is illustrated by the example of a FARUMA mixture density.
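The Granger (1980) aggregation effect summarized in this abstract can be seen in a small simulation. The sketch below is a generic illustration of the mechanism (AR(1) coefficients drawn near unity from a Beta-type density are an assumption for illustration), not the disaggregation scheme of Leipus et al.:

```python
import random

def ar1_path(a, n, rng):
    """Simulate one AR(1) path x_t = a * x_{t-1} + eps_t."""
    x, path = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def acf(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean) for t in range(n - lag))
    return cov / var

rng = random.Random(7)
N, T = 400, 2000
# Random AR(1) coefficients concentrated near 1 (a Beta(4, 1) draw here):
# the regime in which Granger's aggregation argument yields long memory.
coeffs = [rng.betavariate(4.0, 1.0) for _ in range(N)]
# Cross-sectional average of the N independent AR(1) paths at each time step.
aggregate = [sum(col) / N for col in zip(*(ar1_path(a, T, rng) for a in coeffs))]

# The aggregate's autocorrelation decays far more slowly with the lag than
# the geometric decay of any single moderately persistent AR(1) component.
print(acf(aggregate, 1), acf(aggregate, 50))
```

Any individual AR(1) with coefficient a has autocorrelation a^k at lag k, which vanishes geometrically; the aggregate remains strongly correlated even at long lags because the mixture density places mass arbitrarily close to the unit root.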
7

Wang, Zhonghua, Yixing Guo, Kai Lu, Jiguang Wan, Daohui Wang, Ting Yao and Huatao Wu. "Rcmp: Reconstructing RDMA-Based Memory Disaggregation via CXL". ACM Transactions on Architecture and Code Optimization 21, no. 1 (January 19, 2024): 1–26. http://dx.doi.org/10.1145/3634916.

Abstract
Memory disaggregation is a promising architecture for modern datacenters that separates compute and memory resources into independent pools connected by ultra-fast networks, which can improve memory utilization, reduce cost, and enable elastic scaling of compute and memory resources. However, existing memory disaggregation solutions based on remote direct memory access (RDMA) suffer from high latency and additional overheads including page faults and code refactoring. Emerging cache-coherent interconnects such as CXL offer opportunities to reconstruct high-performance memory disaggregation. However, existing CXL-based approaches have a physical distance limitation and cannot be deployed across racks. In this article, we propose Rcmp, a novel low-latency and highly scalable memory disaggregation system based on RDMA and CXL. Its distinguishing feature is that Rcmp improves the performance of RDMA-based systems via CXL, and leverages RDMA to overcome CXL's distance limitation. To address the challenges of the mismatch between RDMA and CXL in terms of granularity, communication, and performance, Rcmp (1) provides a global page-based memory space management and enables fine-grained data access, (2) designs an efficient communication mechanism to avoid communication blocking issues, (3) proposes a hot-page identification and swapping strategy to reduce RDMA communications, and (4) designs an RDMA-optimized RPC framework to accelerate RDMA transfers. We implement a prototype of Rcmp and evaluate its performance by using micro-benchmarks and running a key-value store with YCSB benchmarks. The results show that Rcmp can achieve 5.2× lower latency and 3.8× higher throughput than RDMA-based systems. We also demonstrate that Rcmp can scale well with the increasing number of nodes without compromising performance.
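The hot-page identification and swapping idea mentioned in this abstract can be sketched as a toy tiering model. Everything below (class name, threshold, eviction rule) is a hypothetical simplification for illustration, not Rcmp's actual design: remote accesses are counted, and pages that cross a hotness threshold are promoted into a small fast local pool, evicting the coldest resident page.

```python
from collections import defaultdict

class TieredMemory:
    """Toy model of a compute node with a small fast local pool (CXL-like)
    backed by a large remote pool (RDMA-like). Pages whose access count
    crosses a threshold are promoted locally, evicting the coldest page."""

    def __init__(self, local_capacity, hot_threshold=3):
        self.local_capacity = local_capacity
        self.hot_threshold = hot_threshold
        self.local = set()              # pages resident in the fast pool
        self.counts = defaultdict(int)  # per-page access counters

    def access(self, page):
        self.counts[page] += 1
        if page in self.local:
            return "local"              # fast path: no network round trip
        if self.counts[page] >= self.hot_threshold:
            self._promote(page)
        return "remote"                 # slow path over the network

    def _promote(self, page):
        if len(self.local) >= self.local_capacity:
            # Evict the least-accessed resident page back to the remote pool.
            coldest = min(self.local, key=lambda p: self.counts[p])
            self.local.discard(coldest)
        self.local.add(page)

mem = TieredMemory(local_capacity=2)
trace = [5, 1, 5, 2, 5, 5, 3, 5]        # page 5 is the hot page
kinds = [mem.access(p) for p in trace]  # page 5 ends up served locally
```

After the trace, page 5 has been promoted and its later accesses hit the fast local pool, which is the effect the swapping strategy aims for: converting repeated remote round trips into local accesses.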
8

Celov, D., R. Leipus and A. Philippe. "Time series aggregation, disaggregation, and long memory". Lithuanian Mathematical Journal 47, no. 4 (October 2007): 379–93. http://dx.doi.org/10.1007/s10986-007-0026-6.

9

Zhang, Yingqiang, Chaoyi Ruan, Cheng Li, Xinjun Yang, Wei Cao, Feifei Li, Bo Wang et al. "Towards cost-effective and elastic cloud database deployment via memory disaggregation". Proceedings of the VLDB Endowment 14, no. 10 (June 2021): 1900–1912. http://dx.doi.org/10.14778/3467861.3467877.

Abstract
It is challenging for cloud-native relational databases to meet the ever-increasing needs of scaling compute and memory resources independently and elastically. The recent emergence of memory disaggregation architecture, relying on high-speed RDMA network, offers opportunities to build cost-effective and elastic cloud-native databases. There exist proposals to let unmodified applications run transparently on disaggregated systems. However, running a relational database kernel atop such proposals experiences notable performance degradation and time-consuming failure recovery, offsetting the benefits of disaggregation. To address these challenges, in this paper, we propose a novel database architecture called LegoBase, which explores the co-design of database kernel and memory disaggregation. It pushes the memory management back to the database layer for bypassing the Linux I/O stack and re-using or designing (remote) memory access optimizations with an understanding of data access patterns. LegoBase further splits the conventional ARIES fault tolerance protocol to independently handle the local and remote memory failures for fast recovery of compute instances. We implemented LegoBase atop MySQL. We compare LegoBase against MySQL running on a standalone machine and the state-of-the-art disaggregation proposal Infiniswap. Our evaluation shows that even with a large fraction of data placed on the remote memory, LegoBase's system performance in terms of throughput (up to 9.41% drop) and P99 latency (up to 11.58% increase) is comparable to the monolithic MySQL setup, and significantly outperforms (1.99x-2.33x, respectively) the deployment of MySQL over Infiniswap. Meanwhile, LegoBase introduces an up to 3.87x and 5.48x speedup of the recovery and warm-up time, respectively, over the monolithic MySQL and MySQL over Infiniswap, when handling failures or planned re-configurations.
10

Wang, Ruihong, Jianguo Wang, Stratos Idreos, M. Tamer Özsu and Walid G. Aref. "The case for distributed shared-memory databases with RDMA-enabled memory disaggregation". Proceedings of the VLDB Endowment 16, no. 1 (September 2022): 15–22. http://dx.doi.org/10.14778/3561261.3561263.

Abstract
Memory disaggregation (MD) allows for scalable and elastic data center design by separating compute (CPU) from memory. With MD, compute and memory are no longer coupled into the same server box. Instead, they are connected to each other via ultra-fast networking such as RDMA. MD can bring many advantages, e.g., higher memory utilization, better independent scaling (of compute and memory), and lower cost of ownership. This paper makes the case that MD can fuel the next wave of innovation on database systems. We observe that MD revives the great debate of "shared what" in the database community. We envision that distributed shared-memory databases (DSM-DB, for short) - that have not received much attention before - can be promising in the future with MD. We present a list of challenges and opportunities that can inspire next steps in system design making the case for DSM-DB.
11

Wang, Qing, Youyou Lu and Jiwu Shu. "Building Write-Optimized Tree Indexes on Disaggregated Memory". ACM SIGMOD Record 52, no. 1 (June 7, 2023): 45–52. http://dx.doi.org/10.1145/3604437.3604448.

Abstract
Memory disaggregation architecture physically separates CPU and memory into independent components, which are connected via high-speed RDMA networks, greatly improving resource utilization of database systems. However, such an architecture poses unique challenges to data indexing due to limited RDMA semantics and near-zero computation power at the memory side. Existing indexes supporting disaggregated memory either suffer from low write performance or require hardware modifications.
12

Lee, Myeung-Hun and Hyeun-Jun Moon. "Nonintrusive Load Monitoring Using Recurrent Neural Networks with Occupants Location Information in Residential Buildings". Energies 16, no. 9 (April 25, 2023): 3688. http://dx.doi.org/10.3390/en16093688.

Abstract
Nonintrusive load monitoring (NILM) is a process that disaggregates individual energy consumption from the total energy consumption. In this study, an energy disaggregation model was developed and verified using an algorithm based on a recurrent neural network (RNN). The study also evaluated the utility of occupant location information, which is a nonelectrical input. Energy disaggregation models were developed with RNN-based long short-term memory (LSTM) and gated recurrent unit (GRU) architectures, and their performance was compared against a conventional method based on the factorial hidden Markov model. As a result, the RNN-based GRU disaggregation model improved energy disaggregation performance in accuracy, F1-score, mean absolute error (MAE), and root mean square error (RMSE). In addition, when the occupants' location information was used, the suggested model showed further improved performance and good agreement with the real power and electricity consumption of each appliance.
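The NILM task this abstract describes, recovering per-appliance operation from the aggregate meter signal, can be made concrete with a toy example. The brute-force state search below is purely illustrative (appliance names and power ratings are invented, and the ratings are chosen so every on/off combination has a distinct total); it stands in for the learned RNN models the paper actually trains:

```python
from itertools import product

# Hypothetical appliance power ratings in watts, chosen so that every
# subset of running appliances yields a distinct aggregate reading.
appliances = {"heat_pump": 2000, "dishwasher": 1200, "fridge": 150}

def disaggregate(total_power):
    """Recover per-appliance on/off states from one aggregate meter reading
    by searching all on/off combinations for an exact match."""
    names = list(appliances)
    for states in product((0, 1), repeat=len(names)):
        if sum(s * appliances[n] for s, n in zip(states, names)) == total_power:
            return dict(zip(names, states))
    return None  # no combination of appliances explains the reading

# Ground-truth states over four time steps and the meter signal they induce.
truth = [
    {"heat_pump": 1, "dishwasher": 0, "fridge": 1},
    {"heat_pump": 0, "dishwasher": 1, "fridge": 1},
    {"heat_pump": 1, "dishwasher": 1, "fridge": 0},
    {"heat_pump": 0, "dishwasher": 0, "fridge": 0},
]
meter = [sum(appliances[n] * s for n, s in step.items()) for step in truth]
recovered = [disaggregate(reading) for reading in meter]
# → recovered == truth: each reading uniquely determines the states
```

Real NILM is hard precisely because this toy's assumptions fail: power draws overlap, vary over time, and are observed with noise, which is why the surveyed papers turn to LSTM/GRU sequence models instead of exact matching.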
13

Dacunha-Castelle, Didier and Lisandro Fermin. "Disaggregation of Long Memory Processes on $\mathcal{C}^{\infty}$ Class". Electronic Communications in Probability 11 (2006): 35–44. http://dx.doi.org/10.1214/ecp.v11-1133.

14

Michelogiannakis, George, Benjamin Klenk, Brandon Cook, Min Yee Teh, Madeleine Glick, Larry Dennison, Keren Bergman and John Shalf. "A Case For Intra-rack Resource Disaggregation in HPC". ACM Transactions on Architecture and Code Optimization 19, no. 2 (June 30, 2022): 1–26. http://dx.doi.org/10.1145/3514245.

Abstract
The expected halt of traditional technology scaling is motivating increased heterogeneity in high-performance computing (HPC) systems with the emergence of numerous specialized accelerators. As heterogeneity increases, so does the risk of underutilizing expensive hardware resources if we preserve today's rigid node configuration and reservation strategies. This has sparked interest in resource disaggregation to enable finer-grain allocation of hardware resources to applications. However, there is currently no data-driven study of what range of disaggregation is appropriate in HPC. To that end, we perform a detailed analysis of key metrics sampled in NERSC's Cori, a production HPC system that executes a diverse open-science HPC workload. In addition, we profile a variety of deep-learning applications to represent an emerging workload. We show that for a rack (cabinet) configuration and applications similar to Cori, a central processing unit with intra-rack disaggregation has a 99.5% probability of finding all resources it requires inside its rack. In addition, ideal intra-rack resource disaggregation in Cori could reduce memory and NIC resources by 5.36% to 69.01% and still satisfy the worst-case average rack utilization.
15

Zou, Mingzhe, Shuyang Zhu, Jiacheng Gu, Lidija M. Korunovic and Sasa Z. Djokic. "Heating and Lighting Load Disaggregation Using Frequency Components and Convolutional Bidirectional Long Short-Term Memory Method". Energies 14, no. 16 (August 8, 2021): 4831. http://dx.doi.org/10.3390/en14164831.

Abstract
Load disaggregation for the identification of specific load types in the total demands (e.g., demand-manageable loads, such as heating or cooling loads) is becoming increasingly important for the operation of existing and future power supply systems. This paper introduces an approach in which periodical changes in the total demands (e.g., daily, weekly, and seasonal variations) are disaggregated into corresponding frequency components and correlated with the same frequency components in the meteorological variables (e.g., temperature and solar irradiance), allowing the selection of combinations of frequency components with the strongest correlations as the additional explanatory variables. The paper first presents a novel Fourier series regression method for obtaining target frequency components, which is illustrated on two household-level datasets and one substation-level dataset. These results show that correlations between selected disaggregated frequency components are stronger than the correlations between the original non-disaggregated data. Afterwards, convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) methods are used to represent dependencies among multiple dimensions and to output the estimated disaggregated time series of specific types of loads, where Bayesian optimisation is applied to select hyperparameters of the CNN-BiLSTM model. The CNN-BiLSTM and other deep learning models are reported to have excellent performance in many regression problems, but they are often applied as "black box" models without further exploration or analysis of the modelled processes. Therefore, the paper compares the CNN-BiLSTM model in which correlated frequency components are used as the additional explanatory variables with a naïve CNN-BiLSTM model (without frequency components).
The presented case studies, related to the identification of electrical heating load and lighting load from the total demands, show that the accuracy of disaggregation improves after specific frequency components of the total demand are correlated with the corresponding frequency components of temperature and solar irradiance, i.e., the frequency-component-based CNN-BiLSTM model provides a more accurate load disaggregation. Obtained results are also benchmarked against two other commonly used models, confirming the benefits of the presented load disaggregation methodology.
16

ÇAVDAR, İsmail and Vahid FARYAD. "New Design of a Supervised Energy Disaggregation Model Based on the Deep Neural Network for a Smart Grid". Energies 12, no. 7 (March 29, 2019): 1217. http://dx.doi.org/10.3390/en12071217.

Abstract
Demand-side energy management technology is a key process of the smart grid that helps achieve a more efficient use of generation assets by reducing the energy demand of users during peak loads. In the context of a smart grid and smart metering, this paper proposes a hybrid model of energy disaggregation through deep feature learning for non-intrusive load monitoring to classify home appliances based on the information of main meters. In addition, it introduces a highly accurate deep neural model of supervised energy disaggregation that raises end-user awareness and generates detailed demand-side feedback without the need for expensive smart outlet sensors. A new functional API model of deep learning (DL) based on energy disaggregation was designed by combining a one-dimensional convolutional neural network and recurrent neural network (1D CNN-RNN). The proposed model was trained on Google Colab's Tesla graphics processing unit (GPU) using Keras. The residential energy disaggregation dataset was used for real households and was implemented in the Tensorflow backend. Three different disaggregation methods were compared, namely the convolutional neural network, 1D CNN-RNN, and long short-term memory. The results showed that energy can be disaggregated from the metrics very accurately using the proposed 1D CNN-RNN model. Finally, as a work in progress, we introduced DL on the Edge for Fog Computing non-intrusive load monitoring (NILM) on a low-cost embedded board using a state-of-the-art inference library called uTensor that can support any Mbed enabled board with no need for the DL API of web services and internet connectivity.
17

Mishra, Vaibhawa, Joshua L. Benjamin and Georgios Zervas. "MONet: heterogeneous Memory over Optical Network for large-scale data center resource disaggregation". Journal of Optical Communications and Networking 13, no. 5 (April 2, 2021): 126. http://dx.doi.org/10.1364/jocn.419145.

18

Xia, Min, Wan’an Liu, Ke Wang, Wenzhu Song, Chunling Chen and Yaping Li. "Non-intrusive load disaggregation based on composite deep long short-term memory network". Expert Systems with Applications 160 (December 2020): 113669. http://dx.doi.org/10.1016/j.eswa.2020.113669.

19

Kianpoor, Nasrin, Bjarte Hoff and Trond Østrem. "Deep Adaptive Ensemble Filter for Non-Intrusive Residential Load Monitoring". Sensors 23, no. 4 (February 10, 2023): 1992. http://dx.doi.org/10.3390/s23041992.

Abstract
Identifying flexible loads, such as a heat pump, has an essential role in a home energy management system. In this study, an adaptive ensemble filtering framework integrated with long short-term memory (LSTM) is proposed for identifying flexible loads. The proposed framework, called AEFLSTM, takes advantage of filtering techniques and the representational power of LSTM for load disaggregation by filtering noise from the total power and learning the long-term dependencies of flexible loads. Furthermore, the proposed framework is adaptive and searches ensemble filtering techniques, including discrete wavelet transform, low-pass filter, and seasonality decomposition, to find the best filtering method for disaggregating different flexible loads (e.g., heat pumps). Experimental results are presented for estimating the electricity consumption of a heat pump, a refrigerator, and a dishwasher from the total power of a residential house in British Columbia (a publicly available use case). The results show that AEFLSTM can reduce the loss error (mean absolute error) by 57.4%, 44%, and 55.5% for estimating the power consumption of the heat pump, refrigerator, and dishwasher, respectively, compared to the stand-alone LSTM model. The proposed approach is used for another dataset containing measurements of an electric vehicle to further support the validity of the method. AEFLSTM is able to improve the result for disaggregating an electric vehicle by 22.5%.
20

Maruf, Hasan Al, Yuhong Zhong, Hongyi Wang, Mosharaf Chowdhury, Asaf Cidon and Carl Waldspurger. "Memtrade: Marketplace for Disaggregated Memory Clouds". ACM SIGMETRICS Performance Evaluation Review 51, no. 1 (June 26, 2023): 1–2. http://dx.doi.org/10.1145/3606376.3593553.

Abstract
We present Memtrade, the first practical marketplace for disaggregated memory clouds. Clouds introduce a set of unique challenges for resource disaggregation across different tenants, including resource harvesting, isolation, and matching. Memtrade allows producer virtual machines (VMs) to lease both their unallocated memory and allocated-but-idle application memory to remote consumer VMs for a limited period of time. Memtrade does not require any modifications to host-level system software or support from the cloud provider. It harvests producer memory using an application-aware control loop to form a distributed transient remote memory pool with minimal performance impact; it employs a broker to match producers with consumers while satisfying performance constraints; and it exposes the matched memory to consumers through different abstractions. As a proof of concept, we propose two such memory access interfaces for Memtrade consumers -- a transient KV cache for specified applications and a swap interface that is application-transparent. Our evaluation shows that Memtrade provides significant performance benefits for consumers (improving average read latency up to 2.8X) while preserving confidentiality and integrity, with little impact on producer applications (degrading performance by less than 2.1%).
21

Maruf, Hasan Al, Yuhong Zhong, Hongyi Wang, Mosharaf Chowdhury, Asaf Cidon and Carl Waldspurger. "Memtrade: Marketplace for Disaggregated Memory Clouds". Proceedings of the ACM on Measurement and Analysis of Computing Systems 7, no. 2 (May 19, 2023): 1–27. http://dx.doi.org/10.1145/3589985.

Abstract
We present Memtrade, the first practical marketplace for disaggregated memory clouds. Clouds introduce a set of unique challenges for resource disaggregation across different tenants, including resource harvesting, isolation, and matching. Memtrade allows producer virtual machines (VMs) to lease both their unallocated memory and allocated-but-idle application memory to remote consumer VMs for a limited period of time. Memtrade does not require any modifications to host-level system software or support from the cloud provider. It harvests producer memory using an application-aware control loop to form a distributed transient remote memory pool with minimal performance impact; it employs a broker to match producers with consumers while satisfying performance constraints; and it exposes the matched memory to consumers through different abstractions. As a proof of concept, we propose two such memory access interfaces for Memtrade consumers -- a transient KV cache for specified applications and a swap interface that is application-transparent. Our evaluation using real-world cluster traces shows that Memtrade provides significant performance benefit for consumers (improving average read latency up to 2.8X) while preserving confidentiality and integrity, with little impact on producer applications (degrading performance by less than 2.1%).
22

Rafiq, Hasan, Xiaohan Shi, Hengxu Zhang, Huimin Li and Manesh Kumar Ochani. "A Deep Recurrent Neural Network for Non-Intrusive Load Monitoring Based on Multi-Feature Input Space and Post-Processing". Energies 13, no. 9 (May 2, 2020): 2195. http://dx.doi.org/10.3390/en13092195.

Abstract
Non-intrusive load monitoring (NILM) is a process of estimating operational states and power consumption of individual appliances, which if implemented in real-time, can provide actionable feedback in terms of energy usage and personalized recommendations to consumers. Intelligent disaggregation algorithms such as deep neural networks can fulfill this objective if they possess high estimation accuracy and lowest generalization error. In order to achieve these two goals, this paper presents a disaggregation algorithm based on a deep recurrent neural network using multi-feature input space and post-processing. First, the mutual information method was used to select electrical parameters that had the most influence on the power consumption of each target appliance. Second, selected steady-state parameters based multi-feature input space (MFS) was used to train the 4-layered bidirectional long short-term memory (LSTM) model for each target appliance. Finally, a post-processing technique was used at the disaggregation stage to eliminate irrelevant predicted sequences, enhancing the classification and estimation accuracy of the algorithm. A comprehensive evaluation was conducted on 1-Hz sampled UKDALE and ECO datasets in a noised scenario with seen and unseen test cases. Performance evaluation showed that the MFS-LSTM algorithm is computationally efficient, scalable, and possesses better estimation accuracy in a noised scenario, and generalized to unseen loads as compared to benchmark algorithms. Presented results proved that the proposed algorithm fulfills practical application requirements and can be deployed in real-time.
23

Quek, Yang Thee, Wai Lok Woo and Thillainathan Logenthiran. "Load Disaggregation Using One-Directional Convolutional Stacked Long Short-Term Memory Recurrent Neural Network". IEEE Systems Journal 14, no. 1 (March 2020): 1395–404. http://dx.doi.org/10.1109/jsyst.2019.2919668.

24

Wheater, H. S., T. J. Jolley, C. Onof, N. Mackay and R. E. Chandler. "Analysis of aggregation and disaggregation effects for grid-based hydrological models and the development of improved precipitation disaggregation procedures for GCMs". Hydrology and Earth System Sciences 3, no. 1 (March 31, 1999): 95–108. http://dx.doi.org/10.5194/hess-3-95-1999.

Abstract
Appropriate representation of hydrological processes within atmospheric General Circulation Models (GCMs) is important with respect to internal model dynamics (e.g. surface feedback effects on atmospheric fluxes, continental runoff production) and to simulation of terrestrial impacts of climate change. However, at the scale of a GCM grid-square, several methodological problems arise. Spatial disaggregation of grid-square average climatological parameters is required in particular to produce appropriate point intensities from average precipitation. Conversely, aggregation of land surface heterogeneity is necessary for grid-scale or catchment scale application. The performance of grid-based hydrological models is evaluated for two large (10^4 km^2) UK catchments. Simple schemes, using sub-grid average of individual land use at 40 km scale and with no calibration, perform well at the annual time-scale and, with the addition of a (calibrated) routing component, at the daily and monthly time-scale. Decoupling of hillslope and channel routing does not necessarily improve performance or identifiability. Scale dependence is investigated through application of distribution functions for rainfall and soil moisture at 100 km scale. The results depend on climate, but show interdependence of the representation of sub-grid rainfall and soil moisture distribution. Rainfall distribution is analysed directly using radar rainfall data from the UK and the Arkansas Red River, USA. Among other properties, the scale dependence of spatial coverage upon radar pixel resolution and GCM grid-scale, as well as the serial correlation of coverages are investigated. This leads to a revised methodology for GCM application, as a simple extension of current procedures. A new location-based approach using an image processing technique is then presented, to allow for the preservation of the spatial memory of the process.
25

Koasidis, Konstantinos, Vangelis Marinakis, Haris Doukas, Nikolaos Doumouras, Anastasios Karamaneas and Alexandros Nikas. "Equipment- and Time-Constrained Data Acquisition Protocol for Non-Intrusive Appliance Load Monitoring". Energies 16, no. 21 (28 October 2023): 7315. http://dx.doi.org/10.3390/en16217315.

Abstract
Energy behaviours will play a key role in decarbonising the building sector but require the provision of tailored insights to assist occupants to reduce their energy use. Energy disaggregation has been proposed to provide such information on the appliance level without needing a smart meter plugged in to each load. However, the use of public datasets with pre-collected data employed for energy disaggregation is associated with limitations regarding its compatibility with random households, while gathering data on the ground still requires extensive, and hitherto under-deployed, equipment and time commitments. Going beyond these two approaches, here, we propose a novel data acquisition protocol based on multiplexing appliances’ signals to create an artificial database for energy disaggregation implementations tailored to each household and dedicated to performing under conditions of time and equipment constraints, requiring that only one smart meter be used and for less than a day. In a case study of a Greek household, we train and compare four common algorithms based on the data gathered through this protocol and perform two tests: an out-of-sample test in the artificially multiplexed signal, and an external test to predict the household’s appliances’ operation based on the time series of a real total consumption signal. We find accurate monitoring of the operation and the power consumption level of high-power appliances, while in low-power appliances the operation is still found to be followed accurately but is also associated with some incorrect triggers. These insights attest to the efficacy of the protocol and its ability to produce meaningful tips for changing energy behaviours even under constraints, while in said conditions, we also find that long short-term memory neural networks consistently outperform all other algorithms, with decision trees closely following.
26

Markussen, Jonas, Lars Bjørlykke Kristiansen, Pål Halvorsen, Halvor Kielland-Gyrud, Håkon Kvale Stensland and Carsten Griwodz. "SmartIO". ACM Transactions on Computer Systems 38, no. 1-2 (July 2021): 1–78. http://dx.doi.org/10.1145/3462545.

Abstract
The large variety of compute-heavy and data-driven applications accelerate the need for a distributed I/O solution that enables cost-effective scaling of resources between networked hosts. For example, in a cluster system, different machines may have various devices available at different times, but moving workloads to remote units over the network is often costly and introduces large overheads compared to accessing local resources. To facilitate I/O disaggregation and device sharing among hosts connected using Peripheral Component Interconnect Express (PCIe) non-transparent bridges, we present SmartIO. NVMes, GPUs, network adapters, or any other standard PCIe device may be borrowed and accessed directly, as if they were local to the remote machines. We provide capabilities beyond existing disaggregation solutions by combining traditional I/O with distributed shared-memory functionality, allowing devices to become part of the same global address space as cluster applications. Software is entirely removed from the data path, and simultaneous sharing of a device among application processes running on remote hosts is enabled. Our experimental results show that I/O devices can be shared with remote hosts, achieving native PCIe performance. Thus, compared to existing device distribution mechanisms, SmartIO provides more efficient, low-cost resource sharing, increasing the overall system performance.
27

Fang, Yifan, Shanshan Jiang, Shengxuan Fang, Zhenxi Gong, Min Xia and Xiaodong Zhang. "Non-Intrusive Load Disaggregation Based on a Feature Reused Long Short-Term Memory Multiple Output Network". Buildings 12, no. 7 (19 July 2022): 1048. http://dx.doi.org/10.3390/buildings12071048.

Abstract
Load decomposition technology is an important aspect of power intelligence. At present, there are mainly machine learning methods based on artificial features and deep learning methods for load decomposition. The method based on artificial features has a difficult time obtaining effective load features, leading to low accuracy. The method based on deep learning can automatically extract load characteristics, which improves the accuracy of load decomposition. However, with the deepening of the model structure, the number of parameters becomes too large, the training speed is slow, and the computing cost is high, which leads to the reduction of redundant features and the learning ability in some shallow networks, and the traditional deep learning model has a difficult time obtaining effective features on the time scale. To address these problems, a feature reused long short-term memory multiple output network (M-LSTM) is proposed and used for non-invasive load decomposition tasks. The network proposes an improved multiscale fusion residual module to extract basic load features and proposes the use of LSTM cyclic units to extract time series information. Feature reuse is achieved by combining it with the reorganization of the input data into multiple branches. The proposed structure reduces the difficulty of network optimization, and multi-scale fusion can obtain features on multiple time scales, which improves the ability of model feature extraction. Compared with common network models that tend to train network models for a single target load, the structure can simultaneously decompose the target load power while ensuring the accuracy of load decomposition, thus reducing computational costs, avoiding repetitive model training, and improving training efficiency.
28

Dinamarca, M. C., W. Cerpa, J. Garrido, J. L. Hancke and N. C. Inestrosa. "Hyperforin prevents β-amyloid neurotoxicity and spatial memory impairments by disaggregation of Alzheimer's amyloid-β-deposits". Molecular Psychiatry 11, no. 11 (25 July 2006): 1032–48. http://dx.doi.org/10.1038/sj.mp.4001866.

29

Vella, Jennifer L., Aleksey Molodtsov, Christina V. Angeles, Bruce R. Branchini, Mary Jo Turk and Yina H. Huang. "Dendritic cells maintain anti-tumor immunity by positioning CD8 skin-resident memory T cells". Life Science Alliance 4, no. 10 (6 August 2021): e202101056. http://dx.doi.org/10.26508/lsa.202101056.

Abstract
Tissue-resident memory (TRM) T cells are emerging as critical components of the immune response to cancer; yet, requirements for their ongoing function and maintenance remain unclear. APCs promote TRM cell differentiation and re-activation but have not been implicated in sustaining TRM cell responses. Here, we identified a novel role for dendritic cells in supporting TRM to melanoma. We showed that CD8 TRM cells remain in close proximity to dendritic cells in the skin. Depletion of CD11c+ cells results in rapid disaggregation and eventual loss of melanoma-specific TRM cells. In addition, we determined that TRM migration and/or persistence requires chemotaxis and adhesion mediated by the CXCR6/CXCL16 axis. The interaction between CXCR6-expressing TRM cells and CXCL16-expressing APCs was found to be critical for sustaining TRM cell–mediated tumor protection. These findings substantially expand our knowledge of APC functions in TRM T-cell homeostasis and longevity.
30

Alachiotis, Nikolaos, Panagiotis Skrimponis, Manolis Pissadakis and Dionisios Pnevmatikatos. "Scalable Phylogeny Reconstruction with Disaggregated Near-memory Processing". ACM Transactions on Reconfigurable Technology and Systems 15, no. 3 (30 September 2022): 1–32. http://dx.doi.org/10.1145/3484983.

Abstract
Disaggregated computer architectures eliminate resource fragmentation in next-generation datacenters by enabling virtual machines to employ resources such as CPUs, memory, and accelerators that are physically located on different servers. While this paves the way for highly compute- and/or memory-intensive applications to potentially deploy all CPUs and/or memory resources in a datacenter, it poses a major challenge to the efficient deployment of hardware accelerators: input/output data can reside on different servers than the ones hosting accelerator resources, thereby requiring time- and energy-consuming remote data transfers that diminish the gains of hardware acceleration. Targeting a disaggregated datacenter architecture similar to the IBM dReDBox disaggregated datacenter prototype, the present work explores the potential of deploying custom acceleration units adjacently to the disaggregated-memory controller on memory bricks (in dReDBox terminology), which is implemented on FPGA technology, to reduce data movement and improve performance and energy efficiency when reconstructing large phylogenies (evolutionary relationships among organisms). A fundamental computational kernel is the Phylogenetic Likelihood Function (PLF), which dominates the total execution time (up to 95%) of widely used maximum-likelihood methods. Numerous efforts to boost PLF performance over the years focused on accelerating computation; since the PLF is a data-intensive, memory-bound operation, performance remains limited by data movement, and memory disaggregation only exacerbates the problem. 
We describe two near-memory processing models, one that addresses the problem of workload distribution to memory bricks, which is particularly tailored toward larger genomes (e.g., plants and mammals), and one that reduces overall memory requirements through memory-side data interpolation transparently to the application, thereby allowing the phylogeny size to scale to a larger number of organisms without requiring additional memory.
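The PLF at the heart of this work combines, at every internal tree node, the partial likelihoods of the node's two children (Felsenstein pruning). The sketch below is a toy single-node, single-site illustration with a hypothetical two-state alphabet (function names and the tiny example are assumptions for illustration; real implementations vectorize over thousands of alignment sites and use 4 or 20 states):

```python
def plf_node(left_cl, right_cl, p_left, p_right):
    """Conditional likelihood of a parent node from its two children:
    CL_parent[s] = (sum_t P_left[s][t] * CL_left[t]) *
                   (sum_t P_right[s][t] * CL_right[t])
    This is one step of Felsenstein's pruning algorithm."""
    states = range(len(left_cl))
    parent = []
    for s in states:
        left = sum(p_left[s][t] * left_cl[t] for t in states)
        right = sum(p_right[s][t] * right_cl[t] for t in states)
        parent.append(left * right)
    return parent

# Toy two-state example: identity transition matrices (zero branch
# lengths), both children observed in state 0.
p_ident = [[1.0, 0.0], [0.0, 1.0]]
print(plf_node([1.0, 0.0], [1.0, 0.0], p_ident, p_ident))  # → [1.0, 0.0]
```

Each such node evaluation streams large conditional-likelihood arrays with little arithmetic per byte, which is why the operation is memory-bound and why the authors place acceleration units next to the disaggregated-memory controller.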
31

Zhou, Dongguo, Yangjie Wu and Hong Zhou. "A Nonintrusive Load Monitoring Method for Microgrid EMS Using Bi-LSTM Algorithm". Complexity 2021 (22 January 2021): 1–11. http://dx.doi.org/10.1155/2021/6688889.

Abstract
Nonintrusive load monitoring in smart microgrids aims to obtain the energy consumption of individual appliances from the aggregated energy data, which is generally confronted with the error identification of the load type for energy disaggregation in the microgrid energy management system (EMS). This paper proposes a classification strategy for the nonintrusive load identification scheme based on the bidirectional long short-term memory (Bi-LSTM) network algorithm. The sliding window algorithm is used to extract the detected load event features and obtain the load features of data samples. In order to accurately identify these load features, the steady state information is combined as the input of the Bi-LSTM model during training. By combining forward and backward long short-term memory (LSTM) layers within a recurrent neural network (RNN) architecture, Bi-LSTM has stronger recognition ability. Finally, precision (P), recall (R), accuracy (A), and F1 values are used as the evaluation method for nonintrusive load identification. The experimental results show the accuracy of the Bi-LSTM identification method for load start and stop state feature matching; moreover, the method can identify relatively low-power and multistate appliances.
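Several of the NILM entries here rely on a sliding-window step to detect appliance turn-on/off events in the aggregate signal before a classifier such as the Bi-LSTM is applied. A minimal illustrative sketch (the window size, threshold, and merging of adjacent detections are assumptions for illustration, not taken from the paper):

```python
def detect_events(power, window=3, threshold=50.0):
    """Flag load events where mean power across the windows just
    before and just after an index jumps by more than `threshold`
    watts; adjacent detections are merged into one event at the
    largest step."""
    deltas = []
    for i in range(window, len(power) - window + 1):
        before = sum(power[i - window:i]) / window
        after = sum(power[i:i + window]) / window
        deltas.append((i, after - before))
    events, run = [], []
    for i, d in deltas:
        if abs(d) > threshold:
            run.append((i, d))
        elif run:  # run of super-threshold deltas ended: keep its peak
            events.append(max(run, key=lambda t: abs(t[1])))
            run = []
    if run:
        events.append(max(run, key=lambda t: abs(t[1])))
    return events

# Aggregate signal: 100 W baseline, a kettle-like 1500 W step at t=10.
signal = [100.0] * 10 + [1600.0] * 10
print(detect_events(signal))  # → [(10, 1500.0)]
```

A real system would run this over streaming smart-meter samples and feed the samples around each detected event, together with steady-state features, to the classifier.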
32

Wang, Yunliang, Honglei Yin, Lin Wang, Adam Shuboy, Jiyu Lou, Bing Han, Xiaoxi Zhang and Jinfeng Li. "Curcumin as a Potential Treatment for Alzheimer's Disease: A Study of the Effects of Curcumin on Hippocampal Expression of Glial Fibrillary Acidic Protein". American Journal of Chinese Medicine 41, no. 01 (January 2013): 59–70. http://dx.doi.org/10.1142/s0192415x13500055.

Abstract
Curcumin, an agent traditionally utilized for its preventative action against tumorigenesis, oxidation, inflammation, apoptosis and hyperlipemia, has also been used in the treatment of Alzheimer's disease (AD). Recent advances in the study of AD have revealed astrocytes (AS) as being key factors in the early pathophysiological changes in AD. Glial fibrillary acidic protein (GFAP), a marker specific to AS, is markedly more manifest during morphological modifications and neural degeneration signature during the onset of AD. Several studies investigating the functionality of curcumin have shown that it not only inhibits amyloid sedimentation but also accelerates the disaggregation of amyloid plaque. Thus, we are interested in the relationship between curcumin and spatial memory in AD. In this study, we intend to investigate the effects of curcumin in amyloid-β (Aβ1-40) induced AD rat models on both the behavioral and molecular levels, that is to say, on their spatial memory and on the expression of GFAP in their hippocampi. Our results were statistically significant, showing that the spatial memory of AD rats improved following curcumin treatment (p < 0.05), and that the expression of GFAP mRNA and the number of GFAP positive cells in the curcumin treated rats was decreased relative to the AD group rats (p < 0.05). Furthermore, the expression level of GFAP mRNA in hippocampal AS in the AD rats significantly increased when compared with that in the sham control (p < 0.05). Taken together, these results suggest that curcumin improves the spatial memory disorders (such disorders being symptomatic of AD) in Aβ1-40-induced rats by down regulating GFAP expression and suppressing AS activity.
33

Chauhan, Pallavi Singh, Dhananjay Yadav, Bhupendra Koul, Yugal Kishore Mohanta and Jun-O. Jin. "Recent Advances in Nanotechnology: A Novel Therapeutic System for the Treatment of Alzheimer’s Disease". Current Drug Metabolism 21, no. 14 (30 December 2020): 1144–51. http://dx.doi.org/10.2174/1389200221666201124140518.

Abstract
Amyloid-β (Aβ) plaque formation in the brain is known to be the root cause of Alzheimer’s disease (AD), which affects behavior, memory, and cognitive ability in humans. The brain starts undergoing changes several years before the actual appearance of the symptoms. Nanotechnology could prove to be an alternative strategy for treating the disease effectively. It encompasses the diagnosis as well as the therapeutic aspect using validated biomarkers and nano-based drug delivery systems, respectively. A nano-based therapy may provide an alternate strategy, wherein one targets the protofibrillar amyloid-β (Aβ) structures, and this is followed by their disaggregation as random coils. Conventional/routine drug therapies are inefficient in crossing the blood-brain barrier; however, this hurdle can be overcome with the aid of nanoparticles. The present review highlights the various challenges in the diagnosis and treatment of AD. Meticulous and collaborative research using nanotherapeutic systems could provide remarkable breakthroughs in the early-stage diagnosis and therapy of AD.
34

Giannoula, Christina, Kailong Huang, Jonathan Tang, Nectarios Koziris, Georgios Goumas, Zeshan Chishti and Nandita Vijaykumar. "DaeMon: Architectural Support for Efficient Data Movement in Fully Disaggregated Systems". Proceedings of the ACM on Measurement and Analysis of Computing Systems 7, no. 1 (27 February 2023): 1–36. http://dx.doi.org/10.1145/3579445.

Abstract
Resource disaggregation offers a cost effective solution to resource scaling, utilization, and failure-handling in data centers by physically separating hardware devices in a server. Servers are architected as pools of processor, memory, and storage devices, organized as independent failure-isolated components interconnected by a high-bandwidth network. A critical challenge, however, is the high performance penalty of accessing data from a remote memory module over the network. Addressing this challenge is difficult as disaggregated systems have high runtime variability in network latencies/bandwidth, and page migration can significantly delay critical path cache line accesses in other pages. This paper conducts a characterization analysis on different data movement strategies in fully disaggregated systems, evaluates their performance overheads in a variety of workloads, and introduces DaeMon, the first software-transparent mechanism to significantly alleviate data movement overheads in fully disaggregated systems. First, to enable scalability to multiple hardware components in the system, we enhance each compute and memory unit with specialized engines that transparently handle data migrations. Second, to achieve high performance and provide robustness across various network, architecture and application characteristics, we implement a synergistic approach of bandwidth partitioning, link compression, decoupled data movement of multiple granularities, and adaptive granularity selection in data movements. We evaluate DaeMon in a wide variety of workloads at different network and architecture configurations using a state-of-the-art simulator. DaeMon improves system performance and data access costs by 2.39× and 3.06×, respectively, over the widely-adopted approach of moving data at page granularity.
35

Wood, Michael, Emanuele Ogliari, Alfredo Nespoli, Travis Simpkins and Sonia Leva. "Day Ahead Electric Load Forecast: A Comprehensive LSTM-EMD Methodology and Several Diverse Case Studies". Forecasting 5, no. 1 (2 March 2023): 297–314. http://dx.doi.org/10.3390/forecast5010016.

Abstract
Optimal behind-the-meter energy management often requires a day-ahead electric load forecast capable of learning non-linear and non-stationary patterns, due to the spatial disaggregation of loads and concept drift associated with time-varying physics and behavior. There are many promising machine learning techniques in the literature, but black box models lack explainability and therefore confidence in the models’ robustness can’t be achieved without thorough testing on data sets with varying and representative statistical properties. Therefore this work adopts and builds on some of the highest-performing load forecasting tools in the literature, which are Long Short-Term Memory recurrent networks, Empirical Mode Decomposition for feature engineering, and k-means clustering for outlier detection, and tests a combined methodology on seven different load data sets from six different load sectors. Forecast test set results are benchmarked against a seasonal naive model and SARIMA. The resultant skill scores range from −6.3% to 73%, indicating that the methodology adopted is often but not exclusively effective relative to the benchmarks.
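The skill scores quoted above express improvement over a benchmark forecast; a common formulation (assumed here, the paper may normalize differently) is one minus the ratio of the model's mean squared error to the benchmark's:

```python
def mse(pred, actual):
    """Mean squared error between a forecast and the realized load."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def skill_score(model_pred, bench_pred, actual):
    """Skill relative to a benchmark such as a seasonal naive model:
    positive means the model beats the benchmark, negative means worse
    (cf. the -6.3% to 73% range reported in the abstract)."""
    return 1.0 - mse(model_pred, actual) / mse(bench_pred, actual)

# Hypothetical 4-step load forecast (kW); benchmark repeats yesterday.
actual = [10.0, 12.0, 11.0, 13.0]
naive = [9.0, 13.0, 10.0, 14.0]
model = [10.5, 12.0, 11.0, 12.5]
print(round(skill_score(model, naive, actual), 3))  # → 0.875
```

The same formula works with any error metric in place of MSE, as long as model and benchmark use the same one.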
36

Fu, Yaosheng, Evgeny Bolotin, Niladrish Chatterjee, David Nellans and Stephen W. Keckler. "GPU Domain Specialization via Composable On-Package Architecture". ACM Transactions on Architecture and Code Optimization 19, no. 1 (31 March 2022): 1–23. http://dx.doi.org/10.1145/3484505.

Abstract
As GPUs scale their low-precision matrix math throughput to boost deep learning (DL) performance, they upset the balance between math throughput and memory system capabilities. We demonstrate that a converged GPU design trying to address diverging architectural requirements between FP32 (or larger)-based HPC and FP16 (or smaller)-based DL workloads results in sub-optimal configurations for either of the application domains. We argue that a Composable On-PAckage GPU (COPA-GPU) architecture to provide domain-specialized GPU products is the most practical solution to these diverging requirements. A COPA-GPU leverages multi-chip-module disaggregation to support maximal design reuse, along with memory system specialization per application domain. We show how a COPA-GPU enables DL-specialized products by modular augmentation of the baseline GPU architecture with up to 4× higher off-die bandwidth, 32× larger on-package cache, and 2.3× higher DRAM bandwidth and capacity, while conveniently supporting scaled-down HPC-oriented designs. This work explores the microarchitectural design necessary to enable composable GPUs and evaluates the benefits composability can provide to HPC, DL training, and DL inference. We show that when compared to a converged GPU design, a DL-optimized COPA-GPU featuring a combination of 16× larger cache capacity and 1.6× higher DRAM bandwidth scales per-GPU training and inference performance by 31% and 35%, respectively, and reduces the number of GPU instances by 50% in scale-out training scenarios.
37

Çavdar, İsmail Hakkı and Vahit Feryad. "Efficient Design of Energy Disaggregation Model with BERT-NILM Trained by AdaX Optimization Method for Smart Grid". Energies 14, no. 15 (30 July 2021): 4649. http://dx.doi.org/10.3390/en14154649.

Abstract
One of the basic conditions for the successful implementation of energy demand-side management (EDM) in smart grids is the monitoring of different loads with an electrical load monitoring system. Energy and sustainability concerns present a multitude of issues that can be addressed using approaches of data mining and machine learning. However, resolving such problems due to the lack of publicly available datasets is cumbersome. In this study, we first designed an efficient energy disaggregation (ED) model and evaluated it on the basis of publicly available benchmark data from the Residential Energy Disaggregation Dataset (REDD), and then we aimed to advance ED research in smart grids using the Turkey Electrical Appliances Dataset (TEAD) containing household electricity usage data. In addition, the TEAD was evaluated using the proposed ED model tested with benchmark REDD data. The Internet of things (IoT) architecture with sensors and Node-Red software installations were established to collect data in the research. In the context of smart metering, a nonintrusive load monitoring (NILM) model was designed to classify household appliances according to TEAD data. A highly accurate supervised ED is introduced, which was designed to raise awareness to customers and generate feedback by demand without the need for smart sensors. It is also cost-effective, maintainable, and easy to install, it does not require much space, and it can be trained to monitor multiple devices. We propose an efficient BERT-NILM tuned by new adaptive gradient descent with exponential long-term memory (Adax), using a deep learning (DL) architecture based on bidirectional encoder representations from transformers (BERT). In this paper, an improved training function was designed specifically for tuning of NILM neural networks. We adapted the Adax optimization technique to the ED field and learned the sequence-to-sequence patterns. 
With the updated training function, BERT-NILM outperformed state-of-the-art adaptive moment estimation (Adam) optimization across various metrics on REDD datasets; lastly, we evaluated the TEAD dataset using BERT-NILM training.
38

Park, Jong-Hyeok, Soyee Choi, Gihwan Oh and Sang-Won Lee. "SaS". Proceedings of the VLDB Endowment 14, no. 9 (May 2021): 1481–88. http://dx.doi.org/10.14778/3461535.3461538.

Abstract
Every database engine runs on top of an operating system in the host, strictly separated with the storage. This more-than-half-century-old IHDE (In-Host-Database-Engine) architecture, however, reveals its limitations when run on fast flash memory SSDs. In particular, the IO stacks incur significant run-time overhead and also hinder vertical optimizations between database engines and SSDs. In this paper, we envisage a new database architecture, called SaS (SSD as SQL database engine), where a full-blown SQL database engine runs inside SSD, tightly integrated with SSD architecture without intervening kernel stacks. As IO stacks are removed, SaS is free from their run-time overhead and further can explore numerous vertical optimizations between database engine and SSD. SaS evolves SSD from dummy block device to database server with SQL as its primary interface. The benefit of SaS will be more outstanding in the data centers where the distance between database engine and the storage is ever widening because of virtualization, storage disaggregation, and open software stacks. The advent of computational SSDs with more compute resource will enable SaS to be more viable and attractive database architecture.
39

Athanasiadis, Christos, Dimitrios Doukas, Theofilos Papadopoulos and Antonios Chrysopoulos. "A Scalable Real-Time Non-Intrusive Load Monitoring System for the Estimation of Household Appliance Power Consumption". Energies 14, no. 3 (1 February 2021): 767. http://dx.doi.org/10.3390/en14030767.

Abstract
Smart-meter technology advancements have resulted in the generation of massive volumes of information introducing new opportunities for energy services and data-driven business models. One such service is non-intrusive load monitoring (NILM). NILM is a process to break down the electricity consumption on an appliance level by analyzing the total aggregated data measurements monitored from a single point. Most prominent existing solutions use deep learning techniques resulting in models with millions of parameters and a high computational burden. Some of these solutions use the turn-on transient response of the target appliance to calculate its energy consumption, while others require the total operation cycle. In the latter case, disaggregation is performed either with delay (in the order of minutes) or only for past events. In this paper, a real-time NILM system is proposed. The scope of the proposed NILM algorithm is to detect the turning-on of a target appliance by processing the measured active power transient response and estimate its consumption in real-time. The proposed system consists of three main blocks, i.e., an event detection algorithm, a convolutional neural network classifier and a power estimation algorithm. Experimental results reveal that the proposed system can achieve promising results in real-time, presenting high computational and memory efficiency.
40

Kim, Jihyun, Thi-Thu-Huong Le and Howon Kim. "Nonintrusive Load Monitoring Based on Advanced Deep Learning and Novel Signature". Computational Intelligence and Neuroscience 2017 (2017): 1–22. http://dx.doi.org/10.1155/2017/4216281.

Abstract
Monitoring electricity consumption in the home is an important way to help reduce energy usage. Nonintrusive Load Monitoring (NILM) is an existing technique that helps us monitor electricity consumption effectively and at low cost. NILM is a promising approach to obtain estimates of the electrical power consumption of individual appliances from aggregate measurements of voltage and/or current in the distribution system. Among the previous studies, Hidden Markov Model (HMM) based models have been studied very much. However, growing numbers of appliances, multistate appliances, and appliances with similar power consumption are three big issues in NILM recently. In this paper, we address these problems through the following contributions. First, we proposed state-of-the-art energy disaggregation based on a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) model and additional advanced deep learning. Second, we proposed a novel signature to improve classification performance of the proposed model in the multistate appliance case. We applied the proposed model on two datasets, UK-DALE and REDD. Via our experimental results, we have confirmed that our model outperforms the advanced model. Thus, we show that our combination of advanced deep learning and a novel signature can be a robust solution to overcome NILM’s issues and improve the performance of load identification.
41

Lee, Chung Hyeon, Min Sung Ko, Ye Seul Kim, Ju Eon Ham, Jee Yeon Choi, Kwang Woo Hwang and So-Young Park. "Neuroprotective Effects of Davallia mariesii Roots and Its Active Constituents on Scopolamine-Induced Memory Impairment in In Vivo and In Vitro Studies". Pharmaceuticals 16, no. 11 (14 November 2023): 1606. http://dx.doi.org/10.3390/ph16111606.

Abstract
Beta-amyloid (Aβ) proteins, major contributors to Alzheimer’s disease (AD), are overproduced and accumulate as oligomers and fibrils. These protein accumulations lead to significant changes in neuronal structure and function, ultimately resulting in the neuronal cell death observed in AD. Consequently, substances that can inhibit Aβ production and/or accumulation are of great interest for AD prevention and treatment. In the course of an ongoing search for natural products, the roots of Davallia mariesii T. Moore ex Baker were selected as a promising candidate with anti-amyloidogenic effects. The ethanol extract of D. mariesii roots, along with its active constituents, not only markedly reduced Aβ production by decreasing β-secretase expression in APP–CHO cells (Chinese hamster ovary cells which stably express amyloid precursor proteins), but also exhibited the ability to diminish Aβ aggregation while enhancing the disaggregation of Aβ aggregates, as determined through the Thioflavin T (Th T) assay. Furthermore, in an in vivo study, the extract of D. mariesii roots showed potential (a tendency) for mitigating scopolamine-induced memory impairment, as evidenced by results from the Morris water maze test and the passive avoidance test, which correlated with reduced Aβ deposition. Additionally, the levels of acetylcholine were significantly elevated, and acetylcholinesterase levels significantly decreased in the brains of mice (whole brains). The treatment with the extract of D. mariesii roots also led to upregulated brain-derived neurotrophic factor (BDNF) and phospho-cAMP response element-binding protein (p-CREB) in the hippocampal region. These findings suggest that the extract of D. mariesii roots, along with its active constituents, may offer neuroprotective effects against AD. Consequently, there is potential for the development of the extract of D. mariesii roots and its active constituents as effective therapeutic or preventative agents for AD.
42

Rajan, Sanju and Linda Joseph. "An Adaptable Optimal Network Topology Model for Efficient Data Centre Design in Storage Area Networks". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 2s (31 January 2023): 43–50. http://dx.doi.org/10.17762/ijritcc.v11i2s.6027.

Abstract
In this research, we look at how different network topologies affect the energy consumption of modular data centre (DC) setups. We use a combined-input directed approach to assess the benefits of rack-scale and pod-scale fragmentation across a variety of electrical, optoelectronic, and composite network architectures in comparison to a conventional DC. When the optical transport architecture is implemented and the appropriate resource components are distributed, the findings reveal that fragmentation at the layer level is adequate, even compared to a pod-scale DC. Composable DCs can operate at peak efficiency because of the optical network topology. Logical separation of conventional DC servers across an optical network architecture is also investigated in this article. When compared to physical decentralisation at the rack size, logical decomposition of data centers inside each rack offers a small decrease in the overall DC energy usage thanks to better resource needs allocation. This allows for a flexible, composable architecture that can accommodate performance-based in-memory applications. Moreover, we look at the state of the fundamental model and its use in both static and dynamic data centres. According to our findings, typical DCs become more energy efficient when workload modularity increases, although excessive resource use still exists. By enabling optimal resource use and energy savings, disaggregation and micro-services were able to reduce the typical DC's energy usage by up to 30%. Furthermore, we offer a heuristic to replicate the mixed-integer model's output trends for energy-efficient allocation of workloads in modularized DCs.
43

Liu, Shu-Ying, Shuai Lu, Xiao-Lin Yu, Shi-Gao Yang, Wen Liu, Xiang-Meng Liu, Shao-Wei Wang, et al. "Fruitless Wolfberry-Sprout Extract Rescued Cognitive Deficits and Attenuated Neuropathology in Alzheimer’s Disease Transgenic Mice". Current Alzheimer Research 15, no. 9 (July 11, 2018): 856–68. http://dx.doi.org/10.2174/1567205015666180404160625.

Full text
Abstract
Background: Alzheimer’s disease (AD) is a neurodegenerative disease characterized by memory loss, neuroinflammation and oxidative stress. Overproduction or insufficient clearance of Aβ leads to its pathological aggregation and deposition, which is considered the predominant neuropathological hallmark of AD. Therefore, reducing Aβ levels and inhibiting Aβ-induced neurotoxicity are feasible therapeutic strategies for AD treatment. Wolfberry has been traditionally used as a natural antioxidant and anti-aging product. However, whether wolfberry has therapeutic potential in AD remains unknown. Method: The effects of fruitless wolfberry-sprout extract (FWE) on Aβ fibrillation and fibril disaggregation were measured by thioflavin T fluorescence and transmission electron microscope imaging; Aβ oligomer level was determined by dot-blot; cell viability and apoptosis were assessed by MTT and TUNEL assay. The levels of Aβ40/42, oxidative stress biomarkers and inflammatory cytokines were detected by corresponding kits. 8-month-old male APP/PS1 mice and their age-matched WT littermates were treated with FWE or vehicle by oral administration (gavage) once a day for 4 weeks. Then the cognitive performance was determined using the object recognition test and Y-maze test. The Aβ burden and gliosis were evaluated by immunostaining and immunoblotting, respectively. Results: FWE significantly inhibited Aβ fibrillation and disaggregated the formed Aβ fibrils, lowered Aβ oligomer level and Aβ-induced neuro-cytotoxicity, and attenuated oxidative stress in vitro. Oral administration of FWE remarkably improved cognitive function, reduced Aβ burden, decreased gliosis and inflammatory cytokine release, and ameliorated oxidative stress in the brains of APP/PS1 mice. Conclusion: These findings indicate that FWE is a promising natural agent for AD treatment.
44

Aalipour, Mehdi, Bohumil Šťastný, Filip Horký, and Bahman Jabbarian Amiri. "Scaling an Artificial Neural Network-Based Water Quality Index Model from Small to Large Catchments". Water 14, no. 6 (March 15, 2022): 920. http://dx.doi.org/10.3390/w14060920.

Full text
Abstract
Scaling models is one of the challenges for water resource planning and management, with the aim of bringing the developed models into practice by applying them to predict water quality and quantity for catchments that lack sufficient data. For this study, we evaluated artificial neural network (ANN) training algorithms to predict the water quality index in a source catchment. Then, multiple linear regression (MLR) models were developed, using the predicted water quality index of the ANN training algorithms and water quality variables as dependent and independent variables, respectively. The most appropriate MLR model was selected on the basis of the Akaike information criterion, sensitivity and uncertainty analyses. The performance of the MLR model was then evaluated by a variable aggregation and disaggregation approach, for upscaling and downscaling purposes, using the data from four very large- and three large-sized catchments and from eight medium-, three small- and seven very small-sized catchments, all located in the southern basin of the Caspian Sea. The performance of seven artificial neural network training algorithms, including Quick Propagation, Conjugate Gradient Descent, Quasi-Newton, Limited Memory Quasi-Newton, Levenberg–Marquardt, Online Back Propagation, and Batch Back Propagation, was evaluated to predict the water quality index. The results show that the highest mean absolute error was observed in the WQI as predicted by the ANN LM training algorithm; the lowest error values were for the ANN LMQN and CGD training algorithms.
Our findings also indicate that for upscaling, the aggregated MLR model could provide reliable performance to predict the water quality index, since the r2 coefficient of the models varies from 0.73 ± 0.2 for large catchments to 0.85 ± 0.15 for very large catchments, and for downscaling, the r2 coefficient of the disaggregated MLR model ranges from 0.93 ± 0.05 for very large catchments to 0.97 ± 0.02 for medium catchments. Therefore, scaled models could be applied to catchments that lack sufficient data to perform a rapid assessment of the water quality index in the study area.
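The Akaike-information-criterion step in the abstract, choosing among candidate MLR models, can be sketched minimally with NumPy. This is an illustration only, not the authors' actual pipeline: the Gaussian AIC form, the synthetic data, and the candidate predictor subsets are assumptions.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def aic(X, y, coef):
    """Akaike information criterion for a Gaussian MLR model:
    n*log(RSS/n) + 2k, where k counts the fitted coefficients."""
    A = np.column_stack([np.ones(len(X)), X])
    resid = y - A @ coef
    n, k = len(y), len(coef)
    rss = float(resid @ resid)
    return n * np.log(rss / n) + 2 * k

def select_by_aic(X, y, candidate_columns):
    """Fit each candidate predictor subset and keep the lowest AIC."""
    best = None
    for cols in candidate_columns:
        coef = fit_mlr(X[:, cols], y)
        score = aic(X[:, cols], y, coef)
        if best is None or score < best[0]:
            best = (score, cols, coef)
    return best
```

For example, with a response that truly depends only on the first predictor, `select_by_aic(X, y, [(0,), (0, 1)])` trades goodness of fit against the `2k` penalty for the extra coefficient.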
45

Ajdari, Mohammadamin, Patrick Raaf, Mostafa Kishani, Reza Salkhordeh, Hossein Asadi, and André Brinkmann. "An Enterprise-Grade Open-Source Data Reduction Architecture for All-Flash Storage Systems". Proceedings of the ACM on Measurement and Analysis of Computing Systems 6, no. 2 (May 26, 2022): 1–27. http://dx.doi.org/10.1145/3530896.

Full text
Abstract
All-flash storage (AFS) systems have become an essential infrastructure component to support enterprise applications, where sub-millisecond latency and very high throughput are required. Nevertheless, the price per capacity of solid-state drives (SSDs) is relatively high, which has encouraged system architects to adopt data reduction techniques, mainly deduplication and compression, in enterprise storage solutions. To provide higher reliability and performance, SSDs are typically grouped using redundant array of independent disks (RAID) configurations. Data reduction on top of RAID arrays, however, adds I/O overheads and also complicates the I/O patterns redirected to the underlying backend SSDs, which invalidates the best-practice configurations used in AFS. Unfortunately, existing works on the performance of data reduction do not consider its interaction and I/O overheads with other enterprise storage components, including SSD arrays and RAID controllers. In this paper, using a real setup with enterprise-grade components and based on the open-source data reduction module RedHat VDO, we reveal novel observations on the performance gap between the state-of-the-art and the optimal all-flash storage stack with integrated data reduction. We therefore explore the I/O patterns at the storage entry point and compare them with those at the disk subsystem. Our analysis shows a significant amount of I/O overheads for guaranteeing consistency and avoiding data loss through data journaling, frequent small-sized metadata updates, and duplicate content verification. We accompany these observations with cross-layer optimizations to enhance the performance of AFS, which range from deriving new optimal hardware RAID configurations up to introducing changes to the enterprise storage stack.
By analyzing the characteristics of I/O types and their overheads, we propose three techniques: (a) application-aware lazy persistence, (b) a fast, read-only I/O cache for duplicate verification, and (c) disaggregation of block maps and data by offloading block maps to a very fast persistent memory device. By consolidating all proposed optimizations and implementing them in an enterprise AFS, we show 1.3× to 12.5× speedup over the baseline AFS with 90% data reduction, and from 7.8× up to 57× performance/cost improvement over an optimized AFS (with no data reduction) running applications ranging from 100% read-only to 100% write-only accesses.
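Technique (b), a fast read-only I/O cache for duplicate verification, can be illustrated with a minimal fingerprint-keyed LRU sketch. This is an assumption-laden toy, not VDO's actual design: the class name, SHA-256 fingerprinting, and capacity are all invented for illustration. The point is that a fingerprint hit lets the duplicate candidate be byte-verified against an in-memory copy instead of triggering a disk read.

```python
import hashlib
from collections import OrderedDict

class DuplicateVerifyCache:
    """Read-only LRU cache of recently written blocks, keyed by their
    content fingerprint, used to byte-verify duplicate candidates
    without reading the original block back from disk."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._blocks = OrderedDict()   # fingerprint -> block bytes

    @staticmethod
    def fingerprint(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()

    def admit(self, block: bytes) -> str:
        """Remember a freshly written block; evict the LRU entry if full."""
        fp = self.fingerprint(block)
        self._blocks[fp] = block
        self._blocks.move_to_end(fp)            # mark most recently used
        if len(self._blocks) > self.capacity:
            self._blocks.popitem(last=False)    # drop least recently used
        return fp

    def verify_duplicate(self, candidate: bytes):
        """Return (is_duplicate, served_from_cache). A cache miss would
        fall back to reading and comparing the on-disk block."""
        fp = self.fingerprint(candidate)
        cached = self._blocks.get(fp)
        if cached is None:
            return False, False
        self._blocks.move_to_end(fp)
        return cached == candidate, True        # byte-wise verification
```

The byte-wise comparison after the fingerprint match is what makes the verification safe against hash collisions; the cache only changes where the comparison data comes from.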
46

Chai, Tian, Xiao-Bo Zhao, Wei-Feng Wang, Yin Qiang, Xiao-Yun Zhang, and Jun-Li Yang. "Design, Synthesis of N-phenethyl Cinnamide Derivatives and Their Biological Activities for the Treatment of Alzheimer’s Disease: Antioxidant, Beta-amyloid Disaggregating and Rescue Effects on Memory Loss". Molecules 23, no. 10 (October 16, 2018): 2663. http://dx.doi.org/10.3390/molecules23102663.

Full text
Abstract
Gx-50 is a bioactive compound for the treatment of Alzheimer’s disease (AD) found in Sichuan pepper (Zanthoxylum bungeanum). In order to find a stronger anti-AD lead compound, 20 gx-50 analogs (1–20) were designed and synthesized, and their molecular structures were determined based on nuclear magnetic resonance (NMR) and mass spectrometry (MS) analysis, as well as comparison with literature data. Compounds 1–20 were evaluated for their anti-AD potential using a DPPH radical scavenging assay for their antioxidant activity, a thioflavin T (ThT) fluorescence assay for their inhibitory or disaggregating potency on Aβ, and a transgenic Drosophila model assay for their rescue effect on memory loss. Finally, compound 13 was determined to be a promising anti-AD candidate.
47

Kwon, Miryeong, Junhyeok Jang, Hanjin Choi, Sangwon Lee, and Myoungsoo Jung. "Failure Tolerant Training with Persistent Memory Disaggregation over CXL". IEEE Micro, 2023, 1–11. http://dx.doi.org/10.1109/mm.2023.3237548.

Full text
48

Jang, Junhyeok, Hanjin Choi, Hanyeoreum Bae, Seungjun Lee, Miryeong Kwon, and Myoungsoo Jung. "Bridging Software-Hardware for CXL Memory Disaggregation in Billion-Scale Nearest Neighbor Search". ACM Transactions on Storage, January 6, 2024. http://dx.doi.org/10.1145/3639471.

Full text
Abstract
We propose CXL-ANNS , a software-hardware collaborative approach to enable scalable approximate nearest neighbor search (ANNS) services. To this end, we first disaggregate DRAM from the host via compute express link (CXL) and place all essential datasets into its memory pool. While this CXL memory pool allows ANNS to handle billion-point graphs without an accuracy loss, we observe that the search performance significantly degrades because of CXL’s far-memory-like characteristics. To address this, CXL-ANNS considers the node-level relationship and caches in local memory the neighbors that are expected to be visited most frequently. For the uncached nodes, CXL-ANNS prefetches a set of nodes most likely to be visited soon by understanding the graph traversal behaviors of ANNS. CXL-ANNS is also aware of the architectural structure of the CXL interconnect network and lets different hardware components collaborate with each other on the search. Further, it relaxes the execution dependency of neighbor search tasks and allows ANNS to utilize all hardware in the CXL network in parallel. Our evaluation shows that CXL-ANNS exhibits 93.3% lower query latency than the state-of-the-art ANNS platforms that we tested. CXL-ANNS also outperforms an oracle ANNS system that has unlimited local DRAM capacity by 68.0%, in terms of latency.
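The node-level caching idea, keeping the neighbor lists of the hottest graph nodes in fast local memory while other lookups fall through to the far CXL tier, can be sketched in a few lines. This is an illustration of the general tiering pattern only, not CXL-ANNS itself; the visit statistics, budget, and class layout are assumptions.

```python
from collections import Counter

class NeighborCache:
    """Pin the neighbor lists of the most frequently visited nodes in
    'local' memory; every other lookup models a far-memory (CXL-like)
    access. Hit/miss counters expose how well the budget is spent."""

    def __init__(self, graph, visit_counts, budget):
        # graph: node id -> list of neighbor ids (lives in the far tier)
        self.far = graph
        hot = [n for n, _ in Counter(visit_counts).most_common(budget)]
        self.local = {n: graph[n] for n in hot if n in graph}
        self.hits = 0
        self.misses = 0

    def neighbors(self, node):
        if node in self.local:
            self.hits += 1           # served from fast local memory
            return self.local[node]
        self.misses += 1             # would incur a far-memory access
        return self.far[node]
```

During greedy graph traversal, most expansions touch a small set of hub nodes, so even a modest budget converts the bulk of neighbor fetches into local-memory hits; prefetching the likely next nodes (as the paper describes) would hide the latency of the remaining misses.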
49

Li, Bingbing, Tongzi Wu, Shijie Bian, and John W. Sutherland. "Predictive model for real-time energy disaggregation using long short-term memory". CIRP Annals, April 2023. http://dx.doi.org/10.1016/j.cirp.2023.04.066.

Full text
50

Sun, JiaXuan, JunNian Wang, WenXin Yu, ZhenHeng Wang, and YangHua Wang. "Power Load Disaggregation of Households with Solar Panels Based on an Improved Long Short-term Memory Network". Journal of Electrical Engineering & Technology, August 18, 2020. http://dx.doi.org/10.1007/s42835-020-00513-7.

Full text