Journal articles on the topic 'Memory Devices - Classification'




Consult the top 50 journal articles for your research on the topic 'Memory Devices - Classification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Bezerra, Vitor Hugo, Victor Guilherme Turrisi da Costa, Sylvio Barbon Junior, Rodrigo Sanches Miani, and Bruno Bogaz Zarpelão. "IoTDS: A One-Class Classification Approach to Detect Botnets in Internet of Things Devices." Sensors 19, no. 14 (July 19, 2019): 3188. http://dx.doi.org/10.3390/s19143188.

Abstract:
Internet of Things (IoT) devices have become increasingly widespread. Despite their potential for improving multiple application domains, these devices have poor security, which can be exploited by attackers to build large-scale botnets. In this work, we propose a host-based approach to detect botnets in IoT devices, named IoTDS (Internet of Things Detection System). It relies on one-class classifiers, which model only the legitimate device behaviour for later detection of deviations, avoiding the manual labelling process. The proposed solution is underpinned by a novel agent-manager architecture based on HTTPS, which prevents the IoT device from being overloaded by the training activities. To analyse the device's behaviour, the approach extracts features from the device's CPU utilisation and temperature, memory consumption, and number of running tasks, meaning that it does not make use of network traffic data. To test the approach, we used an experimental IoT setup containing a device compromised by bot malware. Multiple scenarios were tested, including three different IoT device profiles and seven botnets. Four one-class algorithms (Elliptic Envelope, Isolation Forest, Local Outlier Factor, and One-class Support Vector Machine) were evaluated. The results show that the proposed system has good predictive performance across different botnets, achieving a mean F1-score of 94% for the best-performing algorithm, the Local Outlier Factor. The system also has a low impact on the device's energy consumption and on CPU and memory utilisation.
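The one-class idea above can be sketched in a few lines. The toy detector below fits per-feature statistics on benign host metrics only (CPU, temperature, memory, task count, as in the abstract) and flags large deviations; it is a simplified stand-in for the Elliptic Envelope / Local Outlier Factor classifiers the paper actually evaluates, and all numbers are invented.

```python
import math

class GaussianEnvelope:
    """Toy one-class detector: learn per-feature mean/std from benign
    samples only, then flag any sample whose worst per-feature z-score
    exceeds a threshold. A simplified stand-in for the Elliptic Envelope /
    Local Outlier Factor classifiers the paper evaluates."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def fit(self, benign_samples):
        n = len(benign_samples)
        dims = len(benign_samples[0])
        self.mean = [sum(s[d] for s in benign_samples) / n for d in range(dims)]
        self.std = [math.sqrt(sum((s[d] - m) ** 2 for s in benign_samples) / n) or 1.0
                    for d, m in enumerate(self.mean)]
        return self

    def is_anomalous(self, sample):
        # Deviation of the worst-behaved feature, in benign standard deviations.
        z = max(abs(x - m) / s for x, m, s in zip(sample, self.mean, self.std))
        return z > self.threshold

# Features per sample: (cpu_util, temperature_C, mem_MB, n_tasks) -- invented.
benign = [(0.20, 41.0, 120.0, 35), (0.25, 42.0, 118.0, 36),
          (0.22, 40.5, 121.0, 34), (0.18, 41.5, 119.0, 35)]
det = GaussianEnvelope(threshold=3.0).fit(benign)
print(det.is_anomalous((0.95, 55.0, 240.0, 80)))  # bot-like resource burst
print(det.is_anomalous((0.21, 41.2, 119.5, 35)))  # ordinary benign reading
```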
2

Hwang, Yeongjin, Jeong Hoon Jeon, Juhyun Lee, Jonghyuk Yoon, Felix Sunjoo Kim, and Hyungjin Kim. "Effect of Threshold Voltage Window and Variation of Organic Synaptic Transistor for Neuromorphic System." Journal of Nanoscience and Nanotechnology 21, no. 8 (August 1, 2021): 4303–9. http://dx.doi.org/10.1166/jnn.2021.19393.

Abstract:
Synaptic devices, which are considered one of the most important components of neuromorphic systems, require a memory effect to store weight values, a high integration density for a compact system, and a wide window to guarantee accurate programming between weight levels. In this regard, memristive devices such as resistive random-access memory (RRAM) and phase-change memory (PCM) have been intensely studied; however, these devices have quite high current levels regardless of their state, which becomes an issue if a deep and massive neural network is implemented with them, since a large amount of current must sum through a single electrode line. The organic transistor is a potential candidate for a synaptic device owing to its flexibility and low current drivability, which gives low power consumption during inference. In this paper, we investigate the performance and power consumption of a neuromorphic system composed of organic synaptic transistors by conducting a pattern-recognition simulation with the MNIST handwritten-digit data set. The system is analysed according to the threshold voltage (VT) window, device variation, and the number of available states. The classification accuracy is not affected by the VT window if device variation is not considered, but the current-sum ratio between the answer node and the remaining nine nodes varies. In contrast, the accuracy degrades significantly as device variation increases; however, the classification rate is less affected when the number of device states is smaller.
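The interplay of VT window, state count, and device variation described above can be illustrated with a toy programming model: map a weight to the nearest of N threshold-voltage levels, add Gaussian variation, and count how often the programmed VT lands closer to a neighbouring level. The window widths, state count, and variation sigma below are all invented for illustration.

```python
import random

def misprogrammed(w, n_states, vt_min, vt_max, sigma, rng):
    """Program a normalised weight w in [0, 1] to the nearest of n_states
    discrete threshold-voltage levels in the window [vt_min, vt_max], add
    Gaussian device variation (std sigma, in volts), and report whether
    the resulting VT now lies closer to a neighbouring level. A toy model
    of the programming step; every number here is invented."""
    step = (vt_max - vt_min) / (n_states - 1)
    ideal = vt_min + round(w * (n_states - 1)) * step
    programmed = ideal + rng.gauss(0.0, sigma)
    return abs(programmed - ideal) > step / 2

rng = random.Random(7)
trials = 2000
# Same variation, same number of states: only the VT window width differs.
err_narrow = sum(misprogrammed(0.5, 8, 0.0, 0.7, 0.05, rng) for _ in range(trials))
err_wide   = sum(misprogrammed(0.5, 8, 0.0, 2.8, 0.05, rng) for _ in range(trials))
print(err_narrow, err_wide)  # the narrow window suffers far more misreads
```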
3

K I, Ravikumar. "Memristor-Based Deep Learning Classification Model for Object Detection." ECS Transactions 107, no. 1 (April 24, 2022): 277–85. http://dx.doi.org/10.1149/10701.0277ecst.

Abstract:
The memristor-based neural network takes full advantage of the benefits of memristive devices, such as their low power consumption, high integration density, and strong network recognition capacity, and can therefore be used efficiently for binary image classification in AI-based applications. Before memristor-based memory is implemented at the circuit level, its performance needs to be analyzed. In this work, a nine-layer neuromorphic network is designed and used to classify binary images. Using the MNIST dataset, the performance of the architecture is validated, and the impact of device yield and resistance fluctuations under various neuron configurations on network performance is studied. The resistive random-access memory (memristor-based memory) is implemented in Python 3.7 and simulated using MemTorch, an open-source framework for customized large-scale memristive deep-learning applications. The findings indicate that the nine-layer network achieves an accuracy of about 98 percent in digit recognition.
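Behind frameworks like MemTorch sits a simple circuit abstraction: weights become conductance pairs, and a matrix-vector product becomes column current sums. The sketch below shows that abstraction with multiplicative device variation; the conductance and voltage values are invented, and this is not MemTorch's actual API.

```python
import random

def crossbar_mvm(voltages, g_pos, g_neg):
    """Differential memristive crossbar: each weight is the conductance
    difference of a device pair, and each output is a column current sum
    (Ohm's law per device, Kirchhoff's law per column)."""
    n_rows, n_cols = len(g_pos), len(g_pos[0])
    return [sum(voltages[i] * (g_pos[i][j] - g_neg[i][j]) for i in range(n_rows))
            for j in range(n_cols)]

def perturb(g, sigma, rng):
    # Multiplicative device-to-device conductance fluctuation.
    return [[gij * rng.lognormvariate(0.0, sigma) for gij in row] for row in g]

rng = random.Random(0)
g_pos = [[1.0e-6, 2.0e-6], [3.0e-6, 0.5e-6]]  # conductances in siemens (invented)
g_neg = [[0.2e-6, 1.0e-6], [0.1e-6, 2.0e-6]]
v = [0.2, 0.2]                                # read voltages (invented)
ideal = crossbar_mvm(v, g_pos, g_neg)
noisy = crossbar_mvm(v, perturb(g_pos, 0.05, rng), perturb(g_neg, 0.05, rng))
print(ideal)   # column currents without variation
print(noisy)   # the same currents with ~5% device fluctuation
```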
4

Pérez Arteaga, Sandra, Ana Lucila Sandoval Orozco, and Luis Javier García Villalba. "Analysis of Machine Learning Techniques for Information Classification in Mobile Applications." Applied Sciences 13, no. 9 (April 27, 2023): 5438. http://dx.doi.org/10.3390/app13095438.

Abstract:
Due to the daily use of mobile technologies, we live in constant connection with the world through the Internet. Technological innovations in smart devices have allowed us to carry out everyday activities such as communicating, working, and studying, or to use these devices as a means of entertainment. As a result, smartphones have displaced computers as the most important device connected to the Internet today, and users demand smarter applications and functionalities that meet their needs. Artificial intelligence has been a major innovation in information technology and is transforming the way users interact with smart devices. Applications that make use of artificial intelligence have revolutionised our lives, from predicting possible words as we type in a text box to unlocking devices through pattern recognition. However, these technologies face problems such as overheating and battery drain due to high resource consumption, low computational capacity, memory limitations, etc. This paper reviews the most important artificial intelligence algorithms for mobile devices, emphasising the challenges and problems that can arise when implementing these technologies on low-resource devices.
5

Xu, Peng, Zhihua Xiao, Xianglong Wang, Lei Chen, Chao Wang, and Fengwei An. "A Multi-Core Object Detection Coprocessor for Multi-Scale/Type Classification Applicable to IoT Devices." Sensors 20, no. 21 (October 31, 2020): 6239. http://dx.doi.org/10.3390/s20216239.

Abstract:
Power efficiency is becoming a critical aspect of IoT devices. In this paper, we present a compact object-detection coprocessor with multiple cores for multi-scale/type classification. The coprocessor can process scalable block sizes for multi-shape detection windows and is compatible with frame-image sizes up to 2048 × 2048 for multi-scale classification. A memory-reuse strategy that requires only one dual-port SRAM for storing the feature vectors of one row of blocks is developed to save memory. Finally, a prototype platform is implemented on the Intel DE4 development board with the Stratix IV device. The power consumption of each core in the FPGA is only 80.98 mW.
6

Yauri, Ricardo, and Rafael Espino. "Edge device for movement pattern classification using neural network algorithms." Indonesian Journal of Electrical Engineering and Computer Science 30, no. 1 (April 1, 2023): 229. http://dx.doi.org/10.11591/ijeecs.v30.i1.pp229-236.

Abstract:
Portable electronic systems allow the analysis and monitoring of continuous-time signals, such as human activity, by integrating deep learning techniques with cloud computing, at the cost of network traffic and high energy consumption. In addition, algorithms based on neural networks are a widespread solution in these applications, but they have a high computational cost that is unsuitable for edge devices. In this context, solutions have emerged that bring data analysis closer to the edge of the network, so in this paper models adapted to an edge device for the recognition of human activity are evaluated, considering characteristics such as inference time, memory, and accuracy. Two categories of models based on deep and convolutional neural networks are developed by implementing them in the C language and comparing them with the TensorFlow Lite platform. The results show that the implementations with libraries achieve a better accuracy of 76% using principal-component-analysis inputs, with an execution time of 9 ms. Therefore, when evaluating the models, we must consider not only their accuracy but also the execution time and memory on the device.
7

Singh Yadav, Ram, Aniket Sadashiva, Amod Holla, Pranaba Kishor Muduli, and Debanjan Bhowmik. "Impact of edge defects on the synaptic characteristic of a ferromagnetic domain-wall device and on on-chip learning." Neuromorphic Computing and Engineering 3, no. 3 (August 25, 2023): 034006. http://dx.doi.org/10.1088/2634-4386/acf0e4.

Abstract:
Topological-soliton-based devices, like the ferromagnetic domain-wall device, have been proposed as non-volatile memory (NVM) synapses in electronic crossbar arrays for fast and energy-efficient implementation of on-chip learning of neural networks (NN). High linearity and symmetry in the synaptic weight-update characteristic of the device (long-term potentiation (LTP) and long-term depression (LTD)) are important requirements to obtain high classification/regression accuracy in such an on-chip learning scheme. However, obtaining such linear and symmetric LTP and LTD characteristics in the ferromagnetic domain-wall device has remained a challenge. Here, we first carry out micromagnetic simulations of the device to show that the incorporation of defects at the edges of the device, with the defects having higher perpendicular magnetic anisotropy compared to the rest of the ferromagnetic layer, leads to massive improvement in the linearity and symmetry of the LTP and LTD characteristics of the device. This is because these defects act as pinning centres for the domain wall and prevent it from moving during the delay time between two consecutive programming current pulses, which is not the case when the device does not have defects. Next, we carry out system-level simulations of two crossbar arrays with synaptic characteristics of domain-wall synapse devices incorporated in them: one without such defects, and one with such defects. For on-chip learning of both long short-term memory networks (using a regression task) and fully connected NN (using a classification task), we show improved performance when the domain-wall synapse devices have defects at the edges. We also estimate the energy consumption in these synaptic devices and project their scaling, with respect to on-chip learning in corresponding crossbar arrays.
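The linearity and symmetry requirement discussed above is often quantified with a behavioural update model in which each programming pulse changes the conductance by an amount that decays as the weight grows. The sketch below compares a perfectly linear characteristic (what the defect-engineered, pinned device aims at) with a saturating one; the model and constants are generic, not the paper's micromagnetic simulation.

```python
import math

def ltp_curve(n_pulses, beta):
    """Normalised conductance vs. pulse number when each potentiation pulse
    adds dw = A * exp(-beta * w). beta = 0 gives the ideal linear, symmetric
    characteristic; large beta gives the saturating curve of an unpinned
    device. Generic behavioural model; A is chosen so w reaches ~1 when
    beta = 0."""
    w, curve, a = 0.0, [0.0], 1.0 / n_pulses
    for _ in range(n_pulses):
        w += a * math.exp(-beta * w)
        curve.append(w)
    return curve

def nonlinearity(curve):
    # Largest deviation from the straight line joining the curve's endpoints.
    n = len(curve) - 1
    return max(abs(c - curve[0] - (curve[-1] - curve[0]) * i / n)
               for i, c in enumerate(curve))

linear_like = nonlinearity(ltp_curve(64, beta=0.0))   # pinned / defect-engineered
saturating  = nonlinearity(ltp_curve(64, beta=4.0))   # free-running domain wall
print(linear_like, saturating)
```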
8

Lee, Hyungkeuk, NamKyung Lee, and Sungjin Lee. "A Method of Deep Learning Model Optimization for Image Classification on Edge Device." Sensors 22, no. 19 (September 27, 2022): 7344. http://dx.doi.org/10.3390/s22197344.

Abstract:
Due to the recent increase in the use of deep learning models on edge devices, industry demand for Deep Learning Model Optimization (DLMO) is also increasing. This paper derives a usage strategy for DLMO based on performance evaluations of light convolution, quantization, pruning, and knowledge distillation, techniques known to be excellent at reducing memory size and operation delay with a minimal accuracy drop. Through experiments on image classification, we derive feasible and optimal strategies for applying deep learning to Internet of Things (IoT) or tiny embedded devices. In particular, the DLMO strategies most suitable for each on-device Artificial Intelligence (AI) service are proposed in terms of performance factors. In this paper, we suggest the most rational algorithm choice under very limited resource environments by utilizing mature deep learning methodologies.
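Two of the DLMO techniques named above, post-training quantization and magnitude pruning, reduce to a few lines each. The sketch below uses invented weights and a standard 8-bit affine scheme; it illustrates the building blocks, not the paper's exact pipeline.

```python
def quantize_8bit(weights):
    """Uniform affine post-training quantization of one weight tensor to
    8-bit codes, plus the dequantized values used at inference."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0                     # guard: constant tensor
    codes = [round((w - lo) / scale) for w in weights]   # integers in 0..255
    return codes, [lo + c * scale for c in codes]

def prune_by_magnitude(weights, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping keep_ratio of them."""
    k = int(len(weights) * keep_ratio)
    keep = set(sorted(range(len(weights)), key=lambda i: -abs(weights[i]))[:k])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

w = [0.81, -0.42, 0.07, 0.55, -0.03, -0.96, 0.18, 0.002]  # invented weights
codes, dequant = quantize_8bit(w)
sparse = prune_by_magnitude(w, keep_ratio=0.5)
print(codes)
print(sparse)
```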
9

Kwon, Dongseok, Hyeongsu Kim, Kyu-Ho Lee, Joon Hwang, Wonjun Shin, Jong-Ho Bae, Sung Yun Woo, and Jong-Ho Lee. "Super-steep synapses based on positive feedback devices for reliable binary neural networks." Applied Physics Letters 122, no. 10 (March 6, 2023): 102101. http://dx.doi.org/10.1063/5.0131235.

Abstract:
This work proposes positive feedback (PF) device-based synaptic devices for reliable binary neural networks (BNNs). Due to the PF operation, the fabricated PF device shows a high on/off current ratio (2.69 × 10⁷). The PF device has a charge-trap layer by which the turn-on voltage (Von) of the device can be adjusted by program/erase operations, implementing a long-term memory function. Also, due to the steep switching characteristics of the PF device, the conductance is tolerant to the retention time and to variation in the turn-on voltage. Simulations show that high accuracy (88.44% for CIFAR-10 image classification) can be achieved in hardware-based BNNs using PF devices with these properties as synapses.
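In a BNN of the kind targeted above, each synapse only needs two well-separated states, which is why a steep-switching, variation-tolerant device suffices. A minimal sketch of the binary neuron arithmetic (sign binarisation plus an XNOR-style accumulate), with invented values:

```python
def binarize(values):
    """Sign-binarise a vector to +1/-1: each BNN synapse then only needs
    two reliably separated conductance states, which is what a steep
    on/off switching device provides. Schematic arithmetic only, not the
    paper's device model."""
    return [1 if v >= 0 else -1 for v in values]

def bnn_neuron(inputs, weights, threshold=0):
    # XNOR-accumulate: multiply +/-1 pairs, sum, then a sign activation.
    acc = sum(x * w for x, w in zip(binarize(inputs), binarize(weights)))
    return 1 if acc >= threshold else -1

x = [0.3, -1.2, 0.8, -0.1, 0.6]   # invented pre-activations
w = [0.5, -0.7, 0.9, 0.2, -0.4]   # invented real-valued weights
print(bnn_neuron(x, w))
```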
10

Qian, Xuwei, Renlong Hang, and Qingshan Liu. "ReX: An Efficient Approach to Reducing Memory Cost in Image Classification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2099–107. http://dx.doi.org/10.1609/aaai.v36i2.20106.

Abstract:
Exiting simple samples from adaptive multi-exit networks through early modules is an effective way to achieve high computational efficiency. One can observe that deployments of multi-exit architectures on resource-constrained devices are easily limited by the high memory footprint of early modules. In this paper, we propose a novel approach named the recurrent aggregation operator (ReX), which uses recurrent neural networks (RNNs) to effectively aggregate intra-patch features within a large receptive field and obtain delicate local representations, while bypassing large early activations. The resulting model, named ReXNet, can be easily extended to dynamic inference by introducing a novel consistency-based early-exit criterion, which is based on the consistency of classification decisions over several modules rather than the entropy of the prediction distribution. Extensive experiments on two benchmark datasets, i.e., Visual Wake Words and ImageNet-1k, demonstrate that our method consistently reduces the peak RAM and average latency of a wide variety of adaptive models on low-power devices.
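The consistency-based early-exit criterion can be sketched directly from the abstract: exit once the argmax decision has been stable across the last few modules. The probability vectors and patience value below are invented.

```python
def consistency_exit(per_module_probs, patience=2):
    """Walk the exits in order and stop as soon as the last patience+1
    modules agree on the argmax class, instead of thresholding the entropy
    of a single module's prediction. A sketch of the criterion described
    in the abstract; the probability vectors below are invented."""
    history = []
    for idx, probs in enumerate(per_module_probs):
        history.append(max(range(len(probs)), key=probs.__getitem__))
        if len(history) > patience and len(set(history[-(patience + 1):])) == 1:
            return history[-1], idx                     # early exit here
    return history[-1], len(per_module_probs) - 1       # ran to the last module

probs = [
    [0.40, 0.35, 0.25],  # module 0 votes class 0
    [0.20, 0.55, 0.25],  # module 1 votes class 1
    [0.10, 0.70, 0.20],  # module 2 votes class 1
    [0.05, 0.80, 0.15],  # module 3 votes class 1: three consecutive agreements
]
pred, exited_at = consistency_exit(probs, patience=2)
print(pred, exited_at)
```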
11

Li, Xiaochang, Jichen Chen, Zhengjun Zhai, Mingchen Feng, and Xin Ye. "URM: A Unified RAM Management Scheme for NAND Flash Storage Devices." Discrete Dynamics in Nature and Society 2022 (May 31, 2022): 1–11. http://dx.doi.org/10.1155/2022/3376904.

Abstract:
In NAND flash storage devices, the random access memory (RAM) comprises a data buffer and a mapping cache, which play critical roles in storage performance. Furthermore, as the capacity growth rate of RAM chips lags far behind that of flash memory chips, determining how to take advantage of precious RAM remains a crucial issue. However, most existing buffer management studies on storage devices report performance degradation, since these devices cannot refine reference regularities such as sequential, hot, or looping data patterns. In addition, most of these studies focus only on managing the data buffer or the mapping cache separately. In contrast to existing buffer/cache management schemes (BMSs), we propose a unified RAM management (URM) scheme for both the mapping cache and the data buffer in NAND flash storage devices. URM compresses the mapping table to save memory space, and the remaining dynamic RAM space is used for the data buffer. For the data buffer part, we utilize the program-counter technique in the host layer, which provides automatic pattern recognition for different applications, in contrast to existing BMSs. The program-counter technique in our design is able to distinguish four patterns. According to these patterns, the data buffer is divided into four size-adjustable zones. Our approach is therefore linked to multimodal data and can be used in data-intensive systems. In particular, in URM, we use a multivariate classification to predict the prefetching length in mapping-buffer management. This multivariate classification is transformed into multiple binary classifications (logistic regressions). Finally, we extensively evaluate URM using various realistic workloads, and the experimental results show that, compared with three data buffer management schemes (CFLRU, BPLRU, and VBBMS), URM improves the data-buffer hit ratio and response time by an average of 32% and 18%, respectively.
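The "multivariate classification transformed into multiple binary classifications" step above can be sketched as one-vs-rest logistic scoring: each candidate prefetch length gets its own binary logistic model, and the most confident one wins. The coefficients, feature meanings, and class labels below are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def one_vs_rest_predict(x, binary_models):
    """Reduce the multivariate decision to several binary logistic
    regressions: score x with every class's (weights, bias) pair and pick
    the most confident class."""
    scores = {label: sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for label, (w, b) in binary_models.items()}
    return max(scores, key=scores.get), scores

# One binary model per candidate prefetch length (coefficients invented).
models = {
    1: ([0.9, -0.4], -0.2),
    4: ([0.1, 0.8], 0.0),
    8: ([-0.7, 1.5], -0.5),
}
features = [0.2, 0.9]  # e.g. recent sequentiality, request-size trend (invented)
label, scores = one_vs_rest_predict(features, models)
print(label)
```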
12

Wan, Weier, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, et al. "A compute-in-memory chip based on resistive random-access memory." Nature 608, no. 7923 (August 17, 2022): 504–12. http://dx.doi.org/10.1038/s41586-022-04992-8.

Abstract:
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM) promises to meet such demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory. Although recent studies have demonstrated in-memory matrix-vector multiplication on fully integrated RRAM-CIM hardware, it remains a goal for a RRAM-CIM chip to simultaneously deliver high energy efficiency, versatility to support diverse models and software-comparable accuracy. Although efficiency, versatility and accuracy are all indispensable for broad adoption of the technology, the inter-related trade-offs among them cannot be addressed by isolated improvements on any single abstraction level of the design. Here, by co-optimizing across all hierarchies of the design from algorithms and architecture to circuits and devices, we present NeuRRAM, a RRAM-based CIM chip that simultaneously delivers versatility in reconfiguring CIM cores for diverse model architectures, energy efficiency that is two times better than previous state-of-the-art RRAM-CIM chips across various computational bit-precisions, and inference accuracy comparable to software models quantized to four-bit weights across various AI tasks, including accuracy of 99.0 percent on MNIST and 85.7 percent on CIFAR-10 image classification, 84.7-percent accuracy on Google speech command recognition, and a 70-percent reduction in image-reconstruction error on a Bayesian image-recovery task.
13

Rizal, Achmad, Sugondo Hadiyoso, and Ahmad Zaky Ramdani. "FPGA-Based Implementation for Real-Time Epileptic EEG Classification Using Hjorth Descriptor and KNN." Electronics 11, no. 19 (September 23, 2022): 3026. http://dx.doi.org/10.3390/electronics11193026.

Abstract:
The EEG is one of the main medical instruments used by clinicians in the analysis and diagnosis of epilepsy, through visual observation or computers. Visual inspection is difficult, time-consuming, and cannot be conducted in real time. Therefore, we propose a digital system for the classification of epileptic EEG in real time on a Field Programmable Gate Array (FPGA). The implemented digital system comprises communication-interface, feature-extraction, and classifier-model functions. The Hjorth descriptor method was used to extract the activity, mobility, and complexity features, with KNN utilized as the predictor in the classification stage. The proposed system, run on a Zynq-7000 FPGA device, achieves up to 90.74% accuracy in normal, inter-ictal, and ictal EEG classification, and delivers classification results within 0.015 s. The total LUT memory resource used was less than 10%. This system is expected to tackle the problems of visual inspection and computer processing, helping to detect epileptic EEG using low-cost resources while retaining high performance and real-time operation.
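The three Hjorth descriptors used above have closed forms over a signal and its first and second differences, which is what makes them cheap enough for real-time hardware. A sketch in Python (the paper implements this on an FPGA; the sinusoidal "epochs" below merely stand in for EEG recordings):

```python
import math

def hjorth(signal):
    """Hjorth descriptors of a 1-D epoch: activity (variance), mobility (a
    dominant-frequency proxy), and complexity (change in mobility between
    the signal and its first difference)."""
    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / len(x)
    d1 = [b - a for a, b in zip(signal, signal[1:])]   # first difference
    d2 = [b - a for a, b in zip(d1, d1[1:])]           # second difference
    activity = var(signal)
    mobility = math.sqrt(var(d1) / activity)
    complexity = math.sqrt(var(d2) / var(d1)) / mobility
    return activity, mobility, complexity

# Sinusoidal stand-ins for EEG epochs (the paper uses real recordings).
slow = [math.sin(0.1 * n) for n in range(200)]
fast = [math.sin(0.9 * n) for n in range(200)]
print(hjorth(slow))   # low mobility: smooth, slowly varying epoch
print(hjorth(fast))   # high mobility: fast-oscillating epoch
```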
14

Chang, Juneseo, Myeongjin Kang, and Daejin Park. "Low-Power On-Chip Implementation of Enhanced SVM Algorithm for Sensors Fusion-Based Activity Classification in Lightweighted Edge Devices." Electronics 11, no. 1 (January 3, 2022): 139. http://dx.doi.org/10.3390/electronics11010139.

Abstract:
Smart homes assist users by providing convenient services from activity classification with the help of machine learning (ML) technology. However, most of the conventional high-performance ML algorithms require relatively high power consumption and memory usage due to their complex structure. Moreover, previous studies on lightweight ML/DL models for human activity classification still require relatively high resources for extremely resource-limited embedded systems; thus, they are inapplicable for smart homes' embedded system environments. Therefore, in this study, we propose a low-power, memory-efficient, high-speed ML algorithm for smart home activity data classification suitable for an extremely resource-constrained environment. We propose a method for comprehending smart home activity data as image data, hence using the MNIST dataset as a substitute for real-world activity data. The proposed ML algorithm consists of three parts: data preprocessing, training, and classification. In data preprocessing, training data of the same label are grouped into further detailed clusters. The training process generates hyperplanes by accumulating and thresholding from each cluster of preprocessed data. Finally, the classification process classifies input data by calculating the similarity between the input data and each hyperplane using the bitwise-operation-based error function. We verified our algorithm on 'Raspberry Pi 3' and 'STM32 Discovery board' embedded systems by loading trained hyperplanes and performing classification on 1000 training data. Compared to a linear support vector machine implemented from TensorFlow Lite, the proposed algorithm improved memory usage to 15.41%, power consumption to 41.7%, performance up to 50.4%, and power per accuracy to 39.2%. Moreover, compared to a convolutional neural network model, the proposed model improved memory usage to 15.41%, power consumption to 61.17%, performance to 57.6%, and power per accuracy to 55.4%.
15

Sakai, Keita, Mamiko Yagi, Mitsuki Ito, and Jun-ichi Shirakashi. "Multiple connected artificial synapses based on electromigrated Au nanogaps." Journal of Vacuum Science & Technology B 40, no. 5 (September 2022): 053202. http://dx.doi.org/10.1116/6.0002081.

Abstract:
Building an artificial synaptic device with multiple presynaptic inputs will be a significant step toward realization of sophisticated brain-inspired platforms for neuromorphic computing. However, an artificial synapse that can mimic functions of multiple synapses in a single device has not yet been well developed with existing electronic devices. Here, we experimentally implement the functions of multiple synapses in a single artificial synaptic device consisting of multiple connected nanogap electrodes. The “activation” technique, which is based on electromigration of metal atoms induced by a field emission current, was applied to the device to emulate the synaptic functions. We show that the device, upon application of activation, exhibits conductance changes in response to stimulation voltage, similar to the memory states of biological synapses. Several important synaptic responses—notably, short-term plasticity and long-term plasticity—were successfully demonstrated in multiple connected Au-nanogaps. For further application, a simple network was implemented using multi-input devices based on a two-terminal Au nanogap array, exhibiting the ability to classify the digital input vector pattern. These demonstrations pave the way for brain-inspired computing applications such as associative memory, pattern classification, and image recognition.
16

Lee, Taehun, Hae-In Kim, Yoonjin Cho, Sangwoo Lee, Won-Yong Lee, Jin-Hyuk Bae, In-Man Kang, Kwangeun Kim, Sin-Hyung Lee, and Jaewon Jang. "Sol–Gel-Processed Y2O3 Multilevel Resistive Random-Access Memory Cells for Neural Networks." Nanomaterials 13, no. 17 (August 27, 2023): 2432. http://dx.doi.org/10.3390/nano13172432.

Abstract:
Yttrium oxide (Y2O3) resistive random-access memory (RRAM) devices were fabricated using the sol–gel process on indium tin oxide/glass substrates. These devices exhibited conventional bipolar RRAM characteristics without requiring a high-voltage forming process. The effect of current compliance on the Y2O3 RRAM devices was investigated, and the results revealed that the resistance values gradually decreased with increasing set current compliance values. By regulating these values, the formation of pure Ag conductive filament could be restricted. The dominant oxygen ion diffusion and migration within Y2O3 leads to the formation of oxygen vacancies and Ag metal-mixed conductive filaments between the two electrodes. The filament composition changes from pure Ag metal to Ag metal mixed with oxygen vacancies, which is crucial for realizing multilevel cell (MLC) switching. Consequently, intermediate resistance values were obtained, which were suitable for MLC switching. The fabricated Y2O3 RRAM devices could function as a MLC with a capacity of two bits in one cell, utilizing three low-resistance states and one common high-resistance state. The potential of the Y2O3 RRAM devices for neural networks was further explored through numerical simulations. Hardware neural networks based on the Y2O3 RRAM devices demonstrated effective digit image classification with a high accuracy rate of approximately 88%, comparable to the ideal software-based classification (~92%). This indicates that the proposed RRAM can be utilized as a memory component in practical neuromorphic systems.
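The 2-bit/cell scheme described above (one common high-resistance state plus three low-resistance states) amounts to a mapping from symbols to resistance levels plus thresholded read-back. The resistance values and thresholds below are invented, not measured from the Y2O3 devices:

```python
def mlc_write(symbol):
    """Map a 2-bit symbol to one of four nominal resistance states: one
    common high-resistance (reset) state plus three low-resistance states
    obtained with increasing set-current compliance. All resistance values
    here are invented for illustration."""
    levels = {0b00: 1.0e6,   # HRS (common reset state), ohms
              0b01: 3.0e4,   # LRS-1, lowest compliance
              0b10: 1.0e4,   # LRS-2
              0b11: 3.0e3}   # LRS-3, highest compliance
    return levels[symbol]

def mlc_read(resistance):
    # Threshold the read-back resistance between adjacent nominal levels.
    for symbol, bound in ((0b00, 1.7e5), (0b01, 1.7e4), (0b10, 5.5e3)):
        if resistance > bound:
            return symbol
    return 0b11

for s in (0b00, 0b01, 0b10, 0b11):
    assert mlc_read(mlc_write(s) * 1.2) == s   # tolerates 20% upward drift
print("all four states read back correctly")
```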
17

Pérez, Ignacio, and Miguel Figueroa. "A Heterogeneous Hardware Accelerator for Image Classification in Embedded Systems." Sensors 21, no. 8 (April 9, 2021): 2637. http://dx.doi.org/10.3390/s21082637.

Abstract:
Convolutional neural networks (CNN) have been extensively employed for image classification due to their high accuracy. However, inference is a computationally-intensive process that often requires hardware acceleration to operate in real time. For mobile devices, the power consumption of graphics processors (GPUs) is frequently prohibitive, and field-programmable gate arrays (FPGA) become a solution to perform inference at high speed. Although previous works have implemented CNN inference on FPGAs, their high utilization of on-chip memory and arithmetic resources complicate their application on resource-constrained edge devices. In this paper, we present a scalable, low power, low resource-utilization accelerator architecture for inference on the MobileNet V2 CNN. The architecture uses a heterogeneous system with an embedded processor as the main controller, external memory to store network data, and dedicated hardware implemented on reconfigurable logic with a scalable number of processing elements (PE). Implemented on a XCZU7EV FPGA running at 200 MHz and using four PEs, the accelerator infers with 87% top-5 accuracy and processes an image of 224×224 pixels in 220 ms. It consumes 7.35 W of power and uses less than 30% of the logic and arithmetic resources used by other MobileNet FPGA accelerators.
18

Ahmed, O., S. Areibi, and G. Grewal. "Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm." International Journal of Reconfigurable Computing 2013 (2013): 1–33. http://dx.doi.org/10.1155/2013/681894.

Abstract:
Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application-Specific Instruction-set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9× to 18× compared with a pure software implementation running on a Xeon processor.
19

Wang, Jinnan, Weiqin Tong, and Xiaoli Zhi. "Model Parallelism Optimization for CNN FPGA Accelerator." Algorithms 16, no. 2 (February 14, 2023): 110. http://dx.doi.org/10.3390/a16020110.

Abstract:
Convolutional neural networks (CNNs) have made impressive achievements in image classification and object detection. For hardware with limited resources, it is not easy to achieve CNN inference with a large number of parameters without external storage. Model parallelism is an effective way to reduce resource usage by distributing CNN inference among several devices. However, parallelizing a CNN model is not easy, because CNN models have an essentially tightly-coupled structure. In this work, we propose a novel model parallelism method to decouple the CNN structure with group convolution and a new channel shuffle procedure. Our method could eliminate inter-device synchronization while reducing the memory footprint of each device. Using the proposed model parallelism method, we designed a parallel FPGA accelerator for the classic CNN model ShuffleNet. This accelerator was further optimized with features such as aggregate read and kernel vectorization to fully exploit the hardware-level parallelism of the FPGA. We conducted experiments with ShuffleNet on two FPGA boards, each of which had an Intel Arria 10 GX1150 and 16GB DDR3 memory. The experimental results showed that when using two devices, ShuffleNet achieved a 1.42× speed increase and reduced its memory footprint by 34%, as compared to its non-parallel counterpart, while maintaining accuracy.
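The channel-shuffle procedure that makes group convolution decouplable is a fixed reindexing: view the channels as a (groups × channels/groups) matrix, transpose it, and flatten. A sketch on channel labels (the real operator permutes feature-map tensors the same way):

```python
def channel_shuffle(channels, groups):
    """ShuffleNet-style channel shuffle: split the channel list into
    `groups` contiguous groups, then interleave them so that the next
    group convolution sees channels from every group. Pure reindexing,
    shown here on channel labels instead of tensors."""
    per_group = len(channels) // groups
    return [channels[g * per_group + i]
            for i in range(per_group)
            for g in range(groups)]

# Six channels in two groups: a,b,c from group 0 and d,e,f from group 1.
print(channel_shuffle(["a", "b", "c", "d", "e", "f"], groups=2))
# -> ['a', 'd', 'b', 'e', 'c', 'f']
```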
20

Saxena, Vishal, Xinyu Wu, Ira Srivastava, and Kehan Zhu. "Towards Neuromorphic Learning Machines Using Emerging Memory Devices with Brain-Like Energy Efficiency." Journal of Low Power Electronics and Applications 8, no. 4 (October 2, 2018): 34. http://dx.doi.org/10.3390/jlpea8040034.

Full text
Abstract:
The ongoing revolution in Deep Learning is redefining the nature of computing that is driven by the increasing amount of pattern classification and cognitive tasks. Specialized digital hardware for deep learning still holds its predominance due to the flexibility offered by the software implementation and maturity of algorithms. However, it is being increasingly desired that cognitive computing occurs at the edge, i.e., on hand-held devices that are energy constrained, which is energy prohibitive when employing digital von Neumann architectures. Recent explorations in digital neuromorphic hardware have shown promise, but offer low neurosynaptic density needed for scaling to applications such as intelligent cognitive assistants (ICA). Large-scale integration of nanoscale emerging memory devices with Complementary Metal Oxide Semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of Neuromorphic computers that will transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid Neuromorphic System-on-a-chip (NeuSoC) architectures promise machine learning capability at chip-scale form factor, and several orders of magnitude improvement in energy efficiency. Practical demonstration of such architectures has been limited as performance of emerging memory devices falls short of the expected behavior from the idealized memristor-based analog synapses, or weights, and novel machine learning algorithms are needed to take advantage of the device behavior. In this article, we review the challenges involved and present a pathway to realize large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with ‘brain-like’ energy-efficiency.
21

Alahmadi, Abdulhadi, and Tae Sun Chung. "Crash Recovery Techniques for Flash Storage Devices Leveraging Flash Translation Layer: A Review." Electronics 12, no. 6 (March 16, 2023): 1422. http://dx.doi.org/10.3390/electronics12061422.

Full text
Abstract:
Flash storage is a type of nonvolatile semiconductor device that operates continuously and has been replacing hard disks and secondary memory in several storage markets, such as PC/laptop computers and mobile devices, and it is also used in enterprise servers. Moreover, it offers a number of benefits, including compact size, low power consumption, quick access, easy mobility, heat dissipation, shock tolerance, data preservation during a power outage, and random access. Different embedded system products, including digital cameras, smartphones, and personal digital assistants (PDAs), along with sensor devices, currently integrate flash memory. However, as flash memory requires unique capabilities such as “erase before write” as well as “wear-leveling”, an FTL (flash translation layer) is added to the software layer. The FTL software module overcomes the performance problem that arises from the erase-before-write operation and wear-leveling; i.e., flash memory does not allow an in-place update, and therefore a block must be erased prior to overwriting the present data. At the same time, flash storage devices face failure challenges and thus must be able to recover metadata (as well as address-mapping information), including data, after a crash. The FTL layer is responsible for and intended for use in crash recovery. Although the power-off recovery technique is essential for portable devices, most FTL algorithms do not take this into account. In this paper, we review various schemes of crash recovery leveraging the FTL for flash storage devices. We illustrate the classification of the FTL algorithms. Moreover, we discuss the various metrics and parameters each scheme evaluates for comparison with other approaches, along with the flash type. In addition, we analyse the FTL schemes and describe meaningful considerations which play a critical role in the design of power-off recovery employing the FTL.
22

Yang, Jingran. "The Application of Deep Learning for Network Traffic Classification." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 979–84. http://dx.doi.org/10.54097/hset.v39i.6689.

Full text
Abstract:
The classification, detection, and analysis of routine network traffic has been a hot topic for businesses and research institutions due to the proliferation of Internet of Things devices and the explosive development of networks. Traditional methods for categorizing network traffic primarily employ common machine learning algorithms, e.g., decision trees and naive Bayes, but as deep learning technology advances, it is being applied to traffic classification with growing success. This study examines existing deep learning-based network traffic classification techniques and focuses on the categorization of computer network traffic. Firstly, the research background of the topic is introduced; then, traffic classification based on deep learning is described, including traffic classification based on the Stacked Autoencoder, on Convolutional Neural Networks, and on Recurrent Neural Networks. Following this investigation, the paper concludes that Long Short-Term Memory and Convolutional Neural Network models are the best deep learning models for traffic classification, with the three-dimensional Convolutional Neural Network outperforming the others.
23

Lu, Peng, Yang Gao, Hao Xi, Yabin Zhang, Chao Gao, Bing Zhou, Hongpo Zhang, Liwei Chen, and Xiaobo Mao. "KecNet: A Light Neural Network for Arrhythmia Classification Based on Knowledge Reinforcement." Journal of Healthcare Engineering 2021 (April 24, 2021): 1–10. http://dx.doi.org/10.1155/2021/6684954.

Full text
Abstract:
Acquiring electrocardiographic (ECG) signals and performing arrhythmia classification in mobile device scenarios have the advantages of short response time, almost no network bandwidth consumption, and human resource savings. In recent years, deep neural networks have become a popular method to efficiently and accurately simulate nonlinear patterns of ECG data in a data-driven manner but require more resources. Therefore, it is crucial to design deep learning (DL) algorithms that are more suitable for resource-constrained mobile devices. In this paper, KecNet, a lightweight neural network construction scheme based on domain knowledge, is proposed to model ECG data by effectively leveraging signal analysis and medical knowledge. To evaluate the performance of KecNet, we use the Association for the Advancement of Medical Instrumentation (AAMI) protocol and the MIT-BIH arrhythmia database to classify five arrhythmia categories. The result shows that the ACC, SEN, and PRE achieve 99.31%, 99.45%, and 98.78%, respectively. In addition, it also possesses high robustness to noisy environments, low memory usage, and physical interpretability advantages. Benefiting from these advantages, KecNet can be applied in practice, especially wearable and lightweight mobile devices for arrhythmia classification.
24

Andrighetti, Milena, Giovanna Turvani, Giulia Santoro, Marco Vacca, Andrea Marchesin, Fabrizio Ottati, Massimo Ruo Roch, Mariagrazia Graziano, and Maurizio Zamboni. "Data Processing and Information Classification—An In-Memory Approach." Sensors 20, no. 6 (March 18, 2020): 1681. http://dx.doi.org/10.3390/s20061681.

Full text
Abstract:
To live in the information society means to be surrounded by billions of electronic devices full of sensors that constantly acquire data. This enormous amount of data must be processed and classified. A solution commonly adopted is to send these data to server farms to be remotely elaborated. The drawback is a huge battery drain due to the high amount of information that must be exchanged. To compensate for this problem, data must be processed locally, near the sensor itself. But this solution requires huge computational capabilities. While microprocessors, even mobile ones, nowadays have enough computational power, their performance is severely limited by the Memory Wall problem: memories are too slow, so microprocessors cannot fetch enough data from them. A solution is the Processing-In-Memory (PIM) approach, in which new memories are designed that can elaborate data inside them, eliminating the Memory Wall problem. In this work we present an example of such a system, using the Bitmap Indexing algorithm as a case study. This algorithm is used to classify data coming from many sources in parallel. We propose a hardware accelerator designed around the Processing-In-Memory approach that is capable of implementing this algorithm and that can also be reconfigured to do other tasks or to work as a standard memory. The architecture has been synthesized using CMOS technology. The results we obtained highlight that it is not only possible to process and classify huge amounts of data locally, but also to obtain this result with very low power consumption.
25

Sludds, Alexander, Saumil Bandyopadhyay, Zaijun Chen, Zhizhen Zhong, Jared Cochrane, Liane Bernstein, Darius Bunandar, et al. "Delocalized photonic deep learning on the internet’s edge." Science 378, no. 6617 (October 21, 2022): 270–76. http://dx.doi.org/10.1126/science.abq8271.

Full text
Abstract:
Advanced machine learning models are currently impossible to run on edge devices such as smart sensors and unmanned aerial vehicles owing to constraints on power, processing, and memory. We introduce an approach to machine learning inference based on delocalized analog processing across networks. In this approach, named Netcast, cloud-based “smart transceivers” stream weight data to edge devices, enabling ultraefficient photonic inference. We demonstrate image recognition at ultralow optical energy of 40 attojoules per multiply (<1 photon per multiply) at 98.8% (93%) classification accuracy. We reproduce this performance in a Boston-area field trial over 86 kilometers of deployed optical fiber, wavelength multiplexed over 3 terahertz of optical bandwidth. Netcast allows milliwatt-class edge devices with minimal memory and processing to compute at teraFLOPS rates reserved for high-power (>100 watts) cloud computers.
26

Wang, Juan, Jing Zhong, and Jiangqi Li. "IoT-Portrait: Automatically Identifying IoT Devices via Transformer with Incremental Learning." Future Internet 15, no. 3 (March 7, 2023): 102. http://dx.doi.org/10.3390/fi15030102.

Full text
Abstract:
With the development of IoT, IoT devices have proliferated. With the increasing demands of network management and security evaluation, automatic identification of IoT devices becomes necessary. However, existing works require a lot of manual effort and face the challenge of catastrophic forgetting. In this paper, we propose IoT-Portrait, an automatic IoT device identification framework based on a transformer network. IoT-Portrait automatically acquires information about IoT devices as labels and learns the traffic behavior characteristics of devices through a transformer neural network. Furthermore, for privacy protection and overhead reasons, it is not easy to save all past samples to retrain the classification model when new devices join the network. Therefore, we use a class incremental learning method to train the new model to preserve old classes’ features while learning new devices’ features. We implement a prototype of IoT-Portrait based on our lab environment and open-source database. Experimental results show that IoT-Portrait achieves a high identification rate of up to 99% and is well resistant to catastrophic forgetting with a negligible added cost both in memory and time. It indicates that IoT-Portrait can classify IoT devices effectively and continuously.
27

Dawadi, Babu R., Danda B. Rawat, Shashidhar R. Joshi, and Pietro Manzoni. "Intelligent Approach to Network Device Migration Planning towards Software-Defined IPv6 Networks." Sensors 22, no. 1 (December 26, 2021): 143. http://dx.doi.org/10.3390/s22010143.

Full text
Abstract:
Internet and telecom service providers worldwide are facing financial sustainability issues in migrating their existing legacy IPv4 networking systems due to backward compatibility issues with the latest-generation networking paradigms, viz. Internet protocol version 6 (IPv6) and software-defined networking (SDN). Benchmarking of existing networking devices is required to identify whether the existing running devices are upgradable or need replacement to make them operable with SDN and IPv6 networking, so that internet and telecom service providers can properly plan their network migration to optimize capital and operational expenditures for future sustainability. In this paper, we implement the “adaptive neuro fuzzy inference system (ANFIS)”, a well-known intelligent approach, for network device status identification, classifying whether a network device is upgradable or requires replacement. Similarly, we establish a knowledge base (KB) system to store information on the device internetwork operating system (IoS)/firmware version and its SDN and IPv6 support, with end-of-life and end-of-support dates. As input to ANFIS, device performance metrics such as average CPU utilization, throughput, and memory capacity are retrieved and mapped with data from the KB. We run the experiment with other well-known classification methods, for example, support vector machine (SVM), fine tree, and linear regression, to compare performance results with ANFIS. The comparative results show that the ANFIS-based classification approach is more accurate and optimal than the other methods. For service providers with a large number of network devices, this approach assists them to properly classify devices and make decisions for a smooth transition to SDN-enabled IPv6 networks.
28

Герасимов, I. Gerasimov, Яшин, and A. Yashin. "Ion-Molecular Memory Model. Basic Notions. Types of Memory (review)." Journal of New Medical Technologies 20, no. 4 (December 20, 2013): 165–70. http://dx.doi.org/10.12737/2754.

Full text
Abstract:
This review presents the history of known approaches, concepts, and theories of memory, above all human memory, understood as the property of perceiving, storing, retrieving, and reproducing information important for life. The review is written with a specific aim: it precedes the authors' own concept of an ion-molecular memory model. In the introduction, the authors note that it is reasonable to consider memory as a property of both living and non-living objects. A definition of structural memory is presented. The review is dedicated to human memory as biological memory (according to I.P. Amsharin), the supreme manifestation of the nature of bio-objects. The authors give basic definitions of memory elements as information operands: receivers, analyzers, analytical systems, selectors, transmitters, storage devices, media, and library memory. A conceptual classification of types of memory, oriented to the research task of creating an ion-molecular memory model, is presented; as an example, the authors define the classification of memory by the parameter of information storage time. Reviewing existing models of memory, the authors identify three basic types: models of associative (distributed) memory, of so-called working memory (i.e., operational situational memory), and of other memory varieties, from temporary to sensory memory. In conclusion, it is shown that memory modelling has drawn on various mathematical and physical principles: neural networks, holography, fractals, and many branches of non-linear dynamics. The content of this review is based on an analysis of numerous literary sources.
29

Sun, Wookyung, Sujin Choi, Bokyung Kim, and Hyungsoon Shin. "Effect of Initial Synaptic State on Pattern Classification Accuracy of 3D Vertical Resistive Random Access Memory (VRRAM) Synapses." Journal of Nanoscience and Nanotechnology 20, no. 8 (August 1, 2020): 4730–34. http://dx.doi.org/10.1166/jnn.2020.17798.

Full text
Abstract:
Amidst the considerable attention artificial intelligence (AI) has attracted in recent years, neuromorphic chips that mimic the biological neuron have emerged as a promising technology. Memristors, or resistive random-access memory (RRAM), are widely used to implement synaptic devices. Recently, 3D vertical RRAM (VRRAM) has become a promising candidate for reducing resistive memory bit cost. This study investigates the operating principle of a synapse in the 3D VRRAM architecture. In these devices, the classification response current through a vertical pillar is set by applying a training algorithm to the memristors. The accuracy of neural networks with 3D VRRAM synapses was verified by using the HSPICE simulator to classify the alphabet in 7×7 character images. This simulation demonstrated that 3D VRRAMs are usable as synapses in a neural network system and that a 3D VRRAM synapse should be designed with the initial value of the memristor in mind to establish the training conditions for high classification accuracy. These results mean that a synaptic circuit using 3D VRRAM will become a key technology for implementing neural computing hardware.
30

Abbasi, Mahdi, Navid Mousavi, Milad Rafiee, Mohammad R. Khosravi, and Varun G. Menon. "A CRC-Based Classifier Micro-Engine for Efficient Flow Processing in SDN-Based Internet of Things." Mobile Information Systems 2020 (May 18, 2020): 1–8. http://dx.doi.org/10.1155/2020/7641073.

Full text
Abstract:
In the Internet of Things (IoT), network devices and mobile systems should exchange a considerable amount of data with negligible delays. For this purpose, the community has used software-defined networking (SDN), which provides high-speed flow-based communication mechanisms. To satisfy the requirements of SDN in the classification of communicated packets, high-throughput packet classification systems are needed. A hardware-based method of Internet packet classification that is simultaneously high-speed and memory-aware has been proved able to fill the gap between the network speed and the processing speed of systems on the network at traffic rates higher than 100 Gbps. Current architectures, however, have not been successful in achieving both of these goals. This paper proposes the architecture of a processing micro-core for packet classification in high-speed, flow-based network systems. By using a hashing technique, this classifying micro-core fixes the length of the rule field. As a result, with a combination of SRAM and BRAM memory cells and an implementation of two ports on Virtex®6 FPGAs, a memory usage of 14.5 bytes per rule and a throughput of 324 Mpps were achieved in our experiments. The performance-per-memory of the proposed design is also the highest among its major counterparts, enabling it to simultaneously meet the speed and memory-usage criteria.
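The core idea of fixing the rule-field length by hashing can be illustrated with a short sketch; the CRC-32 function and the 16-bit key width below are illustrative assumptions, not the paper's actual parameters:

```python
import zlib

def rule_key(rule: str, width_bits: int = 16) -> int:
    """Map a variable-length classification rule to a fixed-width key by
    hashing it with CRC-32 and truncating to `width_bits` bits. With a
    fixed key width, the lookup memory can be dimensioned per rule
    regardless of how long the original rule string is. Sketch only:
    the paper's exact hash and key width are not given in the abstract.
    """
    return zlib.crc32(rule.encode()) & ((1 << width_bits) - 1)

rules = ["10.0.0.0/8,tcp,80", "192.168.1.0/24,udp,53", "0.0.0.0/0,any,any"]
keys = [rule_key(r) for r in rules]
print(keys)   # every key fits in 16 bits, whatever the rule length
```

A real classifier must still resolve hash collisions (e.g., with a small per-bucket rule list), which is where the per-rule memory figure comes from.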
31

Morgado, Ana C., Catarina Andrade, Luís F. Teixeira, and Maria João M. Vasconcelos. "Incremental Learning for Dermatological Imaging Modality Classification." Journal of Imaging 7, no. 9 (September 7, 2021): 180. http://dx.doi.org/10.3390/jimaging7090180.

Full text
Abstract:
With the increasing adoption of teledermatology, there is a need to improve the automatic organization of medical records, with dermatological image modality being a key filter in this process. Although there has been considerable effort in the classification of medical imaging modalities, this effort has not extended to the field of dermatology. Moreover, as various devices are used in teledermatological consultations, image acquisition conditions may differ. In this work, two models (VGG-16 and MobileNetV2) were used to classify dermatological images from the Portuguese National Health System according to their modality. Afterwards, four incremental learning strategies were applied to these models, namely naive, elastic weight consolidation, averaged gradient episodic memory, and experience replay, enabling their adaptation to new conditions while preserving previously acquired knowledge. The evaluation considered catastrophic forgetting, accuracy, and computational cost. The MobileNetV2 trained with the experience replay strategy, with 500 images in memory, achieved a global accuracy of 86.04% with only 0.0344 of forgetting, which is 6.98% less than the second-best strategy. Regarding efficiency, this strategy took 56 s per epoch longer than the baseline and required, on average, 4554 megabytes of RAM during training. Promising results were achieved, proving the effectiveness of the proposed approach.
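The experience-replay strategy evaluated above can be sketched as a small rehearsal buffer; the reservoir-sampling admission policy and the batch mixing below are common choices, assumed here rather than taken from the paper:

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past training samples. During incremental
    training, each batch of new-class data is mixed with a random draw
    from the buffer so the model rehearses old classes while learning
    new ones. Minimal sketch; the paper's setup keeps 500 images."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.memory = []
        self.seen = 0

    def add(self, sample):
        # Reservoir sampling: the buffer stays a uniform sample of
        # everything seen so far, even once it is full.
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(sample)
        else:
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.memory[i] = sample

    def rehearsal_batch(self, new_batch, k):
        # Train on new samples plus k replayed old samples.
        return new_batch + random.sample(self.memory, min(k, len(self.memory)))

buf = ReplayBuffer(capacity=3)
for s in range(10):
    buf.add(s)
print(len(buf.memory), buf.rehearsal_batch(["new_a", "new_b"], 2))
```

In a real training loop, `sample` would be an (image, label) pair and the mixed batch would be fed to the optimizer exactly like an ordinary batch.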
32

Ahanger, Tariq Ahamed, Abdulaziz Aldaej, Mohammed Atiquzzaman, Imdad Ullah, and Muhammad Yousufudin. "Federated Learning-Inspired Technique for Attack Classification in IoT Networks." Mathematics 10, no. 12 (June 20, 2022): 2141. http://dx.doi.org/10.3390/math10122141.

Full text
Abstract:
More than 10 billion physical items are being linked to the internet to conduct activities more independently and with less human involvement, owing to Internet of Things (IoT) technology. IoT networks are considered a source of identifiable data for malicious attackers to carry out criminal actions using automated processes. Machine learning (ML)-assisted methods for IoT security have gained much attention in recent years. However, the ML training procedure involves large amounts of data that must be transferred to a central server, since data are created continually by IoT devices at the edge. In other words, conventional ML relies on a single server to store all of its data, which makes it a less desirable option for domains concerned about user privacy. A Federated Learning (FL)-based anomaly detection technique, which utilizes decentralized on-device data to identify IoT network intrusions, represents the proposed solution to the aforementioned problem. By exchanging updated weights with the centralized FL server, the data are kept on local IoT devices while training cycles over GRU (Gated Recurrent Unit) models are federated. The ensemble module of the technique assesses updates from several sources to improve the accuracy of the global ML technique. Experiments have shown that the proposed method surpasses state-of-the-art techniques in protecting user data, registering enhanced performance in statistical analysis, energy efficiency, memory utilization, attack classification, and client accuracy analysis for the identification of attacks.
33

Biffi, E., D. Ghezzi, A. Pedrocchi, and G. Ferrigno. "Development and Validation of a Spike Detection and Classification Algorithm Aimed at Implementation on Hardware Devices." Computational Intelligence and Neuroscience 2010 (2010): 1–15. http://dx.doi.org/10.1155/2010/659050.

Full text
Abstract:
Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-period recordings and spike-sorting methods are needed. Therefore, on-line and real-time analysis, optimization of memory use, and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-based hierarchical classifier, evaluated on simulated and real signals. Finally, we proposed a spike detection hardware design on FPGA, whose feasibility was verified in terms of the number of CLBs, memory occupation, and temporal requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems.
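An amplitude-threshold spike detector of the kind described can be sketched as follows; the MAD-based noise estimate and the refractory window are standard choices in the spike-sorting literature, assumed here rather than taken from the paper:

```python
import numpy as np

def detect_spikes(signal, k=4.0, refractory=30):
    """Amplitude-threshold spike detection. The threshold is k times a
    noise level estimated from the median absolute deviation (robust to
    the spikes themselves); a refractory window in samples suppresses
    multiple detections of the same spike. Illustrative sketch only.
    """
    noise = np.median(np.abs(signal)) / 0.6745   # MAD noise estimate
    thr = k * noise
    spikes, last = [], -refractory
    for i, v in enumerate(np.abs(signal)):
        if v > thr and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes

rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 2000)
sig[[300, 900, 1500]] += 15.0      # three injected spikes
print(detect_spikes(sig))
```

Because the detector is a single pass with O(1) state per sample, it maps naturally onto the streaming FPGA implementation the paper targets.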
34

Sun, Zhong, Giacomo Pedretti, Alessandro Bricalli, and Daniele Ielmini. "One-step regression and classification with cross-point resistive memory arrays." Science Advances 6, no. 5 (January 2020): eaay2378. http://dx.doi.org/10.1126/sciadv.aay2378.

Full text
Abstract:
Machine learning has been getting attention in recent years as a tool to process big data generated by the ubiquitous sensors used in daily life. High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence processing of such data. These requirements challenge the current metal-oxide-semiconductor technology, which is limited by Moore’s law approaching its end and the communication bottleneck in conventional computing architecture. Novel computing concepts, architectures, and devices are thus strongly needed to accelerate data-intensive applications. Here, we show that a cross-point resistive memory circuit with feedback configuration can train traditional machine learning algorithms such as linear regression and logistic regression in just one step by computing the pseudoinverse matrix of the data within the memory. One-step learning is further supported by simulations of the prediction of housing price in Boston and the training of a two-layer neural network for MNIST digit recognition.
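The one-step learning idea, solving the least-squares problem in a single pseudoinverse computation rather than by iterative descent, can be reproduced in software; the cross-point circuit computes this in the analog domain, so the numpy version below is only a mathematical analogue:

```python
import numpy as np

# One-step linear regression: w = pinv(X) @ y gives the least-squares
# weights in a single computation, which is the operation the feedback
# cross-point resistive array performs in-memory. Synthetic noiseless
# data, so the recovered weights match the generating ones exactly.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))          # 50 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w
w = np.linalg.pinv(X) @ y             # the "one step"
print(np.round(w, 6))
```

Logistic regression in the paper is handled with the same primitive inside a feedback loop; the pseudoinverse remains the core in-memory operation.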
35

Velichko, Andrei. "Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map." Electronics 9, no. 9 (September 2, 2020): 1432. http://dx.doi.org/10.3390/electronics9091432.

Full text
Abstract:
This study presents a neural network which uses filters based on logistic mapping (LogNNet). LogNNet has a feedforward network structure, but possesses the properties of reservoir neural networks. The input weight matrix, set by a recurrent logistic mapping, forms the kernels that transform the input space to the higher-dimensional feature space. The most effective recognition of a handwritten digit from MNIST-10 occurs under chaotic behavior of the logistic map. The correlation of classification accuracy with the value of the Lyapunov exponent was obtained. An advantage of LogNNet implementation on IoT devices is the significant savings in memory used. At the same time, LogNNet has a simple algorithm and performance indicators comparable to those of the best resource-efficient algorithms available at the moment. The presented network architecture uses an array of weights with a total memory size from 1 to 29 kB and achieves a classification accuracy of 80.3–96.3%. Memory is saved due to the processor, which sequentially calculates the required weight coefficients during the network operation using the analytical equation of the logistic mapping. The proposed neural network can be used in implementations of artificial intelligence based on constrained devices with limited memory, which are integral blocks for creating ambient intelligence in modern IoT environments. From a research perspective, LogNNet can contribute to the understanding of the fundamental issues of the influence of chaos on the behavior of reservoir-type neural networks.
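The memory-saving trick, regenerating the input weight matrix from the logistic map instead of storing it, can be sketched as follows; the rescaling of the raw map output to (-1, 1) is an assumption, as the abstract does not give LogNNet's exact normalization:

```python
import numpy as np

def logistic_weights(rows, cols, r=3.9, x0=0.1):
    """Fill an input weight matrix by iterating the logistic map
    x <- r*x*(1-x). Only r and x0 need to be stored, so the whole
    matrix can be regenerated on the fly on a memory-constrained
    device. r=3.9 lies in the chaotic regime the paper finds most
    effective; the (-1, 1) rescaling is an illustrative assumption."""
    w = np.empty(rows * cols)
    x = x0
    for i in range(w.size):
        x = r * x * (1.0 - x)
        w[i] = 2.0 * x - 1.0          # rescale (0, 1) -> (-1, 1)
    return w.reshape(rows, cols)

W = logistic_weights(25, 784)          # kernel for 28x28 MNIST inputs
inp = np.random.default_rng(0).random(784)
features = np.tanh(W @ inp)            # higher-level reservoir features
print(W.shape, features.shape)
```

A trainable output layer on top of `features` then does the actual classification; only its (small) weight matrix has to live in device memory.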
36

Afrooz, Sonia, and Nima Jafari Navimipour. "Memory Designing Using Quantum-Dot Cellular Automata: Systematic Literature Review, Classification and Current Trends." Journal of Circuits, Systems and Computers 26, no. 12 (August 2017): 1730004. http://dx.doi.org/10.1142/s0218126617300045.

Full text
Abstract:
Quantum-dot cellular automata (QCA) has emerged as one of the potential computational structures for emerging nanocomputing systems. It offers great potential for developing circuits with high density and low heat dissipation, enabling faster computers with lower power consumption. QCA is a new approach to realizing nanoscale digital devices and to studying and analyzing their various parameters. It is also a potential technology for low-power, high-density memory designs. Large memory designs in QCA show unique features because of their architectural structure: in QCA-based architectures, memory must be kept in motion, i.e., the memory state has to be continuously moved through a set of QCA cells. These architectures differ in features such as the number of bits stored in a loop, the access type (serial or parallel), and the cell arrangement of the memory bank. The decisive features of a QCA memory cell design, however, are its cell count and its energy consumption. Although the review and study of QCA-based memories are very important, there has been no complete and systematic literature review of the mechanisms in this field. Therefore, this paper provides a systematic review of five main types of QCA-based memories: read-only memory (ROM), registers, flip-flops, content-addressable memory (CAM), and random-access memory (RAM). It also presents the advantages, disadvantages, and important challenges of the reviewed mechanisms, suggesting promising lines for future research.
37

Kyriakos, Angelos, Elissaios-Alexios Papatheofanous, Charalampos Bezaitis, and Dionysios Reisis. "Resources and Power Efficient FPGA Accelerators for Real-Time Image Classification." Journal of Imaging 8, no. 4 (April 15, 2022): 114. http://dx.doi.org/10.3390/jimaging8040114.

Full text
Abstract:
A plethora of image- and video-related applications involve complex processes that impose the need for hardware accelerators to achieve real-time performance. Among these, notable applications include Machine Learning (ML) tasks using Convolutional Neural Networks (CNNs) that detect objects in image frames. Aiming to contribute to CNN accelerator solutions, the current paper focuses on the design of Field-Programmable Gate Arrays (FPGAs) for CNNs of limited feature space to improve performance, power consumption, and resource utilization. The proposed design approach targets designs that can utilize the logic and memory resources of a single FPGA device and mainly benefits edge, mobile, and on-board satellite (OBC) computing, especially their image-processing-related applications. This work exploits the proposed approach to develop an FPGA accelerator for vessel detection on a Xilinx Virtex 7 XC7VX485T FPGA device (Advanced Micro Devices, Inc., Santa Clara, CA, USA). The resulting architecture operates on RGB images of size 80×80 or on sliding windows; it is trained on the “Ships in Satellite Imagery” dataset and, by achieving a frequency of 270 MHz, completing inference in 0.687 ms, and consuming 5 watts, it validates the approach.
38

Huang, Kaizhi, Xinglu Li, Shaoyu Wang, Zengchao Geng, and Ge Niu. "RFID Scheme for IoT Devices Based on LSTM-CNN." Journal of Sensors 2022 (November 18, 2022): 1–9. http://dx.doi.org/10.1155/2022/8122815.

Full text
Abstract:
As an essential branch of physical-layer authentication research, radio frequency identification (RFID) has advantages in achieving lightweight and highly reliable authentication. However, in the Internet of Things (IoT) environment, where a large number of devices are connected to the network, RF fingerprints are less distinct among devices of the same type. To this end, in this paper, we propose an RFID scheme for IoT devices based on a long short-term memory and convolutional neural network (LSTM-CNN). This scheme combines the excellent learning abilities of LSTM and CNN to perceive context information and extract local features of RF data. Specifically, RF data are first fed into the LSTM to obtain long-term dependency features containing temporal information. Then, a CNN is designed for secondary feature extraction to enlarge RF differences and is further used for device classification. The experimental results on the open RF data set ORACLE indicate that the identification accuracy of the proposed scheme can reach over 99%. Compared with other schemes, the performance is improved by 6%-30%.
APA, Harvard, Vancouver, ISO, and other styles
39

Tasci, Mustafa, Ayhan Istanbullu, Selahattin Kosunalp, Teodor Iliev, Ivaylo Stoyanov, and Ivan Beloev. "An Efficient Classification of Rice Variety with Quantized Neural Networks." Electronics 12, no. 10 (May 18, 2023): 2285. http://dx.doi.org/10.3390/electronics12102285.

Full text
Abstract:
Rice, as one of the significant grain products across the world, features a wide range of varieties in terms of usability and efficiency. It may be known by various variety and regional names depending on the specific location. To specify a particular rice type, different features are considered, such as shape and color. This study uses an available dataset in Turkey consisting of five different varieties: Ipsala, Arborio, Basmati, Jasmine, and Karacadag. The dataset comprises 75,000 grain images in total; each of the 5 varieties has 15,000 samples with a 256 × 256-pixel dimension. The main contribution of this paper is to create Quantized Neural Network (QNN) models to efficiently classify rice varieties with the purpose of reducing resource usage on edge devices. It is well known that QNN is a successful method for alleviating the high computational costs and power requirements of many Deep Learning (DL) algorithms. These advantages of the quantization process have the potential to provide an efficient environment for artificial intelligence applications on microcontroller-driven edge devices. For this purpose, we created eight different QNN networks using MLP- and LeNet-5-based deep learning models with varying quantization levels to be trained on the dataset. With the LeNet-5-based QNN network created at the W3A3 quantization level, a 99.87% classification accuracy was achieved with only 23.1 KB of memory used for the parameters. In addition to this tremendous benefit in memory usage, the number of giga operations per second (GOPs) is 23 times less than in similar classification studies.
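The W3A3 level mentioned above means 3-bit weights and 3-bit activations. A minimal uniform quantizer shows where the memory saving comes from; the clipping range and scale rule here are a common textbook choice, not necessarily the exact scheme used in the paper.

```python
import numpy as np

def quantize_uniform(x, bits, signed=True):
    """Uniformly quantize x to 2**bits levels; returns (dequantized values, integer codes)."""
    if signed:
        qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1   # e.g. -4..3 for 3 bits
    else:
        qmin, qmax = 0, 2 ** bits - 1
    scale = np.max(np.abs(x)) / max(abs(qmin), qmax)
    if scale == 0:
        scale = 1.0
    q = np.clip(np.round(x / scale), qmin, qmax).astype(np.int8)
    return q * scale, q

rng = np.random.default_rng(1)
w = rng.standard_normal(1000).astype(np.float32)   # stand-in for a layer's weights

w3, codes = quantize_uniform(w, bits=3)            # "W3": 3-bit weights
mem_fp32 = w.size * 32                             # bits for full precision
mem_w3 = w.size * 3                                # bits after quantization
print(f"memory: {mem_fp32} bits -> {mem_w3} bits "
      f"({mem_fp32 / mem_w3:.1f}x smaller)")
print(f"max quantization error: {np.max(np.abs(w - w3)):.3f}")
```

Each weight collapses from 32 bits to 3, which is how the full parameter set fits in tens of kilobytes on a microcontroller-class device.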
APA, Harvard, Vancouver, ISO, and other styles
40

Abu Doush, Iyad, and Sanaa Jarrah. "Accessible Interface for Context Awareness in Mobile Devices for Users With Memory Impairment." International Journal of Biomedical and Clinical Engineering 8, no. 2 (July 2019): 1–30. http://dx.doi.org/10.4018/ijbce.2019070101.

Full text
Abstract:
Memory problems usually appear because of aging or may happen because of a brain injury. Such problems prevent the person from performing daily activities. In this paper, the authors propose a framework to develop a smartphone solution to detect and recognize the user context. In order to build the context detection framework, the authors compare three different machine learning techniques (C4.5, random, and BFTree) in terms of context detection accuracy. Then, the authors use the classification technique with the highest accuracy in a mobile application to help users by detecting their context. The authors develop two interfaces based on the suggested accessibility features for users with memory impairment. Two scenarios are used to evaluate the user interface, and the results demonstrate the applicability and usability of the proposed context detection framework.
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Xufei. "A Novel ECG Automatic Detection Using LongShort-Term Memory Network and Internet of Things Technology." Journal of Medical Imaging and Health Informatics 11, no. 6 (June 1, 2021): 1592–98. http://dx.doi.org/10.1166/jmihi.2021.3684.

Full text
Abstract:
The early detection of cardiovascular diseases based on electrocardiogram (ECG) is very important for the timely treatment of cardiovascular patients, which increases the survival rate of patients. ECG is a visual representation that describes changes in cardiac bioelectricity and is the basis for detecting heart health. With the rise of edge machine learning and Internet of Things (IoT) technologies, small machine learning models have received attention. This study proposes an ECG automatic classification method based on Internet of Things technology and an LSTM network to achieve early monitoring and early prevention of cardiovascular diseases. Specifically, this paper first proposes a single-layer bidirectional LSTM network structure that makes full use of the timing-dependent features of the sampling points before and after to automatically extract features. The network structure is more lightweight and has lower computational complexity. In order to verify the effectiveness of the proposed classification model, relevant comparison algorithms are used for verification on the MIT-BIH public data set. Secondly, the model is embedded in a wearable device to automatically classify the collected ECG. Finally, when an abnormality is detected, the user is alerted by an alarm. The experimental results show that the proposed model has a simple structure and a high classification and recognition rate, which can meet the needs of wearable devices for monitoring the ECG of patients.
APA, Harvard, Vancouver, ISO, and other styles
42

Deshmukh, Akshata, and Dr Tanuja Pattanshetti. "Deep Learning Technique to Identify the Malicious Traffic in Fog based IoT Networks." International Journal of Innovative Technology and Exploring Engineering 11, no. 8 (July 30, 2022): 59–66. http://dx.doi.org/10.35940/ijitee.h9179.0711822.

Full text
Abstract:
The network of devices known as the Internet of Things (IoT) consists of hardware with sensors and software. These devices communicate and exchange data through the internet. IoT device-based data exchanges are often processed at cloud servers. Since the number of edge devices and the quantity of data exchanged are increasing, massive latency-related concerns are observed. The answer to these issues is fog computing technology. A fog computing layer is introduced between the edge devices and cloud servers. Edge devices can conveniently access data from the fog servers. Security of fog layer devices is a major concern. As it provides easy access to different resources, the fog layer is more vulnerable to different attacks. In this paper, a deep learning-based intrusion detection approach called the Multi-LSTM Aggregate Classifier (MLAC) is proposed to identify malicious traffic in fog-based IoT networks. The MLAC approach contains a set of long short-term memory (LSTM) modules. The outcomes of these modules are aggregated using a Random Forest to produce the final result. The network intrusion dataset UNSW-NB15 is used to evaluate the performance of the MLAC technique. For binary classification, an accuracy of 89.40% has been achieved using the proposed deep learning-based MLAC model.
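The aggregation step in MLAC, combining the decisions of several LSTM modules into one verdict, can be sketched in a few lines. The paper uses a Random Forest as the meta-learner; a plain majority vote stands in for it here, and the module outputs are hypothetical.

```python
from collections import Counter

def aggregate_votes(module_outputs):
    """Combine per-module binary decisions (0 = benign traffic, 1 = malicious)
    into a final verdict. MLAC feeds these outputs to a Random Forest; a
    simple majority vote is used here purely for illustration."""
    counts = Counter(module_outputs)
    return counts.most_common(1)[0][0]

# Hypothetical outcomes of three LSTM modules on one traffic window:
print("aggregated:", aggregate_votes([1, 0, 1]))  # two modules flag malicious
```

The value of a learned meta-classifier over a fixed vote is that it can weight modules by their reliability on different attack categories, which a majority vote cannot.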
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Yajie, Xiaomei Zhang, and Haomin Hu. "Continuous User Authentication on Multiple Smart Devices." Information 14, no. 5 (May 5, 2023): 274. http://dx.doi.org/10.3390/info14050274.

Full text
Abstract:
Recent developments in the mobile and intelligence industry have led to an explosion in the use of multiple smart devices such as smartphones, tablets, smart bracelets, etc. To achieve lasting security after initial authentication, many studies have been conducted to apply user authentication through behavioral biometrics. However, few of them consider continuous user authentication on multiple smart devices. In this paper, we investigate user authentication from a new perspective—continuous authentication on multi-devices, that is, continuously authenticating users after both initial access to one device and transfer to other devices. In contrast to previous studies, we propose a continuous user authentication method that exploits behavioral biometric identification on multiple smart devices. In this study, we consider the sensor data captured by accelerometer and gyroscope sensors on both smartphones and tablets. Furthermore, multi-device behavioral biometric data are utilized as the input of our optimized neural network model, which combines a convolutional neural network (CNN) and a long short-term memory (LSTM) network. In particular, we construct two-dimensional domain images to characterize the underlying features of sensor signals between different devices and then input them into our network for classification. In order to strengthen the effectiveness and efficiency of authentication on multiple devices, we introduce an adaptive confidence-based strategy by taking historical user authentication results into account. This paper evaluates the performance of our multi-device continuous user authentication mechanism under different scenarios, and extensive empirical results demonstrate its feasibility and efficiency. Using the mechanism, we achieved mean accuracies of 99.8% and 99.2% for smartphones and tablets, respectively, in approximately 2.3 s, which shows that it authenticates users accurately and quickly.
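The adaptive confidence-based strategy mentioned above, taking historical authentication results into account rather than trusting each window in isolation, can be sketched as exponential smoothing of per-window classifier scores. The smoothing factor, threshold, and score sequence below are all illustrative assumptions, not values from the paper.

```python
def update_confidence(history, score, alpha=0.3):
    """Blend the current window's classifier score into a running confidence
    via exponential smoothing. alpha and the 0.5 threshold are illustrative."""
    return alpha * score + (1 - alpha) * history

conf = 0.5          # neutral prior when the user first unlocks a device
decisions = []
for score in [0.9, 0.8, 0.2, 0.85, 0.9]:   # hypothetical per-window scores
    conf = update_confidence(conf, score)
    decisions.append(conf >= 0.5)           # stay authenticated?
print(decisions)    # → [True, True, True, True, True]
```

Note the third window: a single low score (0.2) does not lock the legitimate user out, because the accumulated history keeps the confidence above the threshold; a sustained run of low scores still drives it down.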
APA, Harvard, Vancouver, ISO, and other styles
44

Ajerla, Dharmitha, Sazia Mahfuz, and Farhana Zulkernine. "A Real-Time Patient Monitoring Framework for Fall Detection." Wireless Communications and Mobile Computing 2019 (September 22, 2019): 1–13. http://dx.doi.org/10.1155/2019/9507938.

Full text
Abstract:
Fall detection is a major problem in the healthcare department. Elderly people are more prone to falls than others. More than 50% of injury-related hospitalizations occur in people aged over 65. Commercial fall detection devices are expensive and charge a monthly fee for their services. A more affordable and adaptable system is necessary for retirement homes and clinics to build a smart city powered by IoT and artificial intelligence. An effective fall detection system would detect a fall and send an alarm to the appropriate authorities. We propose a framework that uses edge computing: instead of sending data to the cloud, wearable devices send data to a nearby edge device like a laptop or mobile device for real-time analysis. We use cheap wearable sensor devices from MbientLab, an open-source streaming engine called Apache Flink for streaming data analytics, and a long short-term memory (LSTM) network model for fall classification. The model is trained using a published dataset called “MobiAct.” Using the trained model, we analyse optimal sampling rates, sensor placement, and multistream data correction. Our edge computing framework can perform real-time streaming data analytics to detect falls with an accuracy of 95.8%.
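Before an LSTM can classify falls, the continuous accelerometer stream has to be cut into fixed-length segments. A minimal windowing sketch follows; the 50 Hz rate, 2 s window, and 50% overlap are illustrative choices, not the framework's tuned parameters.

```python
import numpy as np

def make_windows(stream, win, step):
    """Slice a (T, 3) accelerometer stream into overlapping (win, 3) segments,
    the unit of input an LSTM fall classifier consumes."""
    return np.stack([stream[i:i + win]
                     for i in range(0, len(stream) - win + 1, step)])

# 10 s of hypothetical 3-axis data at 50 Hz, cut into 2 s windows, 50% overlap.
fs = 50
stream = np.zeros((10 * fs, 3))
windows = make_windows(stream, win=2 * fs, step=fs)
print(windows.shape)  # (9, 100, 3): 9 windows of 100 samples x 3 axes
```

The sampling-rate analysis the abstract mentions amounts to re-running this segmentation (and the classifier) at different `fs` values to find the cheapest rate that still preserves accuracy.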
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Hai-Tian, Tae Joon Park, A. N. M. Nafiul Islam, Dat S. J. Tran, Sukriti Manna, Qi Wang, Sandip Mondal, et al. "Reconfigurable perovskite nickelate electronics for artificial intelligence." Science 375, no. 6580 (February 4, 2022): 533–39. http://dx.doi.org/10.1126/science.abj7943.

Full text
Abstract:
Reconfigurable devices offer the ability to program electronic circuits on demand. In this work, we demonstrated on-demand creation of artificial neurons, synapses, and memory capacitors in post-fabricated perovskite NdNiO3 devices that can be simply reconfigured for a specific purpose by single-shot electric pulses. The sensitivity of electronic properties of perovskite nickelates to the local distribution of hydrogen ions enabled these results. With experimental data from our memory capacitors, simulation results of a reservoir computing framework showed excellent performance for tasks such as digit recognition and classification of electrocardiogram heartbeat activity. Using our reconfigurable artificial neurons and synapses, simulated dynamic networks outperformed static networks for incremental learning scenarios. The ability to fashion the building blocks of brain-inspired computers on demand opens up new directions in adaptive networks.
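Reservoir computing, the framework in which the memory capacitors above are evaluated, trains only a linear readout on top of a fixed dynamical system, which is why a physical device can serve as the reservoir. A toy echo-state sketch in numpy illustrates the idea; it is a software stand-in under generic assumptions, not the paper's device-level simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: its weights are never trained. In the paper the
# "reservoir" is the physical response of the nickelate memcapacitors.
N = 50
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.standard_normal((N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the dynamics stable

def run_reservoir(u):
    """Drive the reservoir with scalar input sequence u; collect states."""
    x = np.zeros(N)
    states = np.zeros((len(u), N))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in * ut)
        states[t] = x
    return states

u = rng.uniform(-1, 1, 500)
target = np.roll(u, 1)          # task: recall the previous input (short-term memory)
X = run_reservoir(u)[50:]       # discard the initial transient
y = target[50:]
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)   # only the readout is fitted
err = np.mean((X @ w_out - y) ** 2)
print(f"readout MSE: {err:.4f}")
```

Because training touches only `w_out`, swapping the numpy reservoir for measured device responses changes nothing in the learning procedure, which is the appeal of the approach for hardware like this.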
APA, Harvard, Vancouver, ISO, and other styles
46

Gong, An, Xingyu Zhang, Yu Wang, Yongan Zhang, and Mengyan Li. "Hybrid Data Augmentation and Dual-Stream Spatiotemporal Fusion Neural Network for Automatic Modulation Classification in Drone Communications." Drones 7, no. 6 (May 25, 2023): 346. http://dx.doi.org/10.3390/drones7060346.

Full text
Abstract:
Automatic modulation classification (AMC) is one of the most important technologies in various communication systems, including drone communications. It can be applied to confirm the legitimacy of access devices, help drone systems better identify and track signals from other communication devices, and prevent drone interference to ensure the safety and reliability of communication. However, the classification performance of previously proposed AMC approaches still needs to be improved. In this study, a dual-stream spatiotemporal fusion neural network (DSSFNN)-based AMC approach is proposed to enhance the classification accuracy for the purpose of aiding drone communication, because the DSSFNN can effectively mine spatiotemporal features from modulation signals through residual modules, long short-term memory (LSTM) modules, and attention mechanisms. In addition, a novel hybrid data augmentation method based on phase shift and self-perturbation is introduced to further improve performance and avoid overfitting. The experimental results demonstrate that the proposed AMC approach can achieve an average classification accuracy of 63.44%, and the maximum accuracy can reach 95.01% at SNR = 10 dB, which outperforms the previously proposed methods.
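The phase-shift half of the hybrid augmentation is straightforward to sketch on complex I/Q samples: rotating the constellation leaves the modulation class unchanged, so it yields valid new training examples. The self-perturbation below is an illustrative stand-in (signal-scaled Gaussian noise), and the toy QPSK burst is invented; neither is taken from the paper.

```python
import numpy as np

def phase_shift(iq, theta):
    """Rotate complex I/Q samples by theta radians. The constellation shape,
    and hence the modulation class label, is unchanged."""
    return iq * np.exp(1j * theta)

def self_perturb(iq, sigma, rng):
    """Add small complex Gaussian noise scaled by the signal's own spread,
    an illustrative stand-in for the paper's self-perturbation."""
    scale = sigma * np.std(iq)
    noise = rng.standard_normal(iq.shape) + 1j * rng.standard_normal(iq.shape)
    return iq + scale * noise

rng = np.random.default_rng(0)
# Toy QPSK burst: 128 symbols on the unit circle at odd multiples of pi/4.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 128)))
aug = self_perturb(phase_shift(qpsk, np.pi / 8), sigma=0.05, rng=rng)
# The rotation preserves per-sample magnitude exactly:
print(np.allclose(np.abs(phase_shift(qpsk, np.pi / 8)), np.abs(qpsk)))
```

Label-preserving transforms like this let the network see many phase offsets of each class, which directly targets the carrier-phase ambiguity a receiver faces.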
APA, Harvard, Vancouver, ISO, and other styles
47

Feng, Kai, Xitian Pi, Hongying Liu, and Kai Sun. "Myocardial Infarction Classification Based on Convolutional Neural Network and Recurrent Neural Network." Applied Sciences 9, no. 9 (May 7, 2019): 1879. http://dx.doi.org/10.3390/app9091879.

Full text
Abstract:
Myocardial infarction is one of the most threatening cardiovascular diseases for human beings. With the rapid development of wearable devices and portable electrocardiogram (ECG) medical devices, it is possible and conceivable to detect and monitor myocardial infarction ECG signals in time. This paper proposed a multi-channel automatic classification algorithm combining a 16-layer convolutional neural network (CNN) and long-short term memory network (LSTM) for I-lead myocardial infarction ECG. The algorithm preprocessed the raw data to first extract the heartbeat segments; then it was trained in the multi-channel CNN and LSTM to automatically learn the acquired features and complete the myocardial infarction ECG classification. We utilized the Physikalisch-Technische Bundesanstalt (PTB) database for algorithm verification, and obtained an accuracy rate of 95.4%, a sensitivity of 98.2%, a specificity of 86.5%, and an F1 score of 96.8%, indicating that the model can achieve good classification performance without complex handcrafted features.
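The preprocessing step named above, extracting heartbeat segments from the raw signal before the CNN/LSTM sees them, can be sketched as R-peak detection plus fixed windowing. The naive threshold detector and the window lengths below are illustrative assumptions (real pipelines use Pan-Tompkins or similar), and the ECG is synthetic.

```python
import numpy as np

def find_r_peaks(ecg, fs, thresh):
    """Naive R-peak detector: local maxima above thresh, at least 0.3 s apart.
    A sketch only; production pipelines use Pan-Tompkins or comparable."""
    peaks, last = [], -fs
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thresh and ecg[i] >= ecg[i - 1] and ecg[i] >= ecg[i + 1]:
            if i - last >= int(0.3 * fs):
                peaks.append(i)
                last = i
    return peaks

def segment_beats(ecg, peaks, fs, before=0.25, after=0.45):
    """Cut a fixed window around each R-peak: the 'heartbeat segments' that
    feed the multi-channel CNN/LSTM."""
    a, b = int(before * fs), int(after * fs)
    return [ecg[p - a:p + b] for p in peaks if p - a >= 0 and p + b <= len(ecg)]

fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t)   # synthetic baseline wander
ecg[::fs] += 1.0                           # one synthetic R-peak per second
peaks = find_r_peaks(ecg, fs, thresh=0.5)
beats = segment_beats(ecg, peaks, fs)
print(len(peaks), len(beats), len(beats[0]))
```

Fixing the segment length this way gives the downstream network a constant input shape regardless of heart rate, which is what makes batched training on heartbeat segments possible.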
APA, Harvard, Vancouver, ISO, and other styles
48

Andreadis, Alessandro, Giovanni Giambene, and Riccardo Zambon. "Monitoring Illegal Tree Cutting through Ultra-Low-Power Smart IoT Devices." Sensors 21, no. 22 (November 16, 2021): 7593. http://dx.doi.org/10.3390/s21227593.

Full text
Abstract:
Forests play a fundamental role in preserving the environment and fighting global warming. Unfortunately, they are continuously reduced by human interventions such as deforestation, fires, etc. This paper proposes and evaluates a framework for automatically detecting illegal tree-cutting activity in forests through audio event classification. We envisage ultra-low-power tiny devices, embedding edge-computing microcontrollers and long-range wireless communication to cover vast areas in the forest. To reduce the energy footprint and resource consumption for effective and pervasive detection of illegal tree cutting, an efficient and accurate audio classification solution based on convolutional neural networks is proposed, designed specifically for resource-constrained wireless edge devices. With respect to previous works, the proposed system allows for recognizing a wider range of threats related to deforestation through a distributed and pervasive edge-computing technique. Different pre-processing techniques have been evaluated, focusing on the trade-off between classification accuracy and computational resources, memory, and energy footprint. Furthermore, experimental long-range communication tests have been conducted in real environments. Data obtained from the experimental results show that the proposed solution can detect and notify tree-cutting events for efficient and cost-effective forest monitoring through smart IoT, with an accuracy of 85%.
APA, Harvard, Vancouver, ISO, and other styles
49

Biswal, Manas Ranjan, Tahesin Samira Delwar, Abrar Siddique, Prangyadarsini Behera, Yeji Choi, and Jee-Youl Ryu. "Pattern Classification Using Quantized Neural Networks for FPGA-Based Low-Power IoT Devices." Sensors 22, no. 22 (November 10, 2022): 8694. http://dx.doi.org/10.3390/s22228694.

Full text
Abstract:
With the recent growth of the Internet of Things (IoT) and the demand for faster computation, quantized neural networks (QNNs) or QNN-enabled IoT can offer better performance than conventional convolutional neural networks (CNNs). With the aim of reducing memory access costs and increasing computation efficiency, QNN-enabled devices are expected to transform numerous industrial applications with lower processing latency and power consumption. Another form of QNN is the binarized neural network (BNN), which quantizes values to two levels. In this paper, CNN-, QNN-, and BNN-based pattern recognition techniques are implemented and analyzed on an FPGA. The FPGA hardware acts as an IoT device due to connectivity with the cloud, and QNNs and BNNs are considered to offer better performance in terms of low power and low resource use on hardware platforms. The CNN and QNN implementations and their comparative analysis are evaluated based on accuracy, weight bit error, ROC curve, and execution speed. The paper also discusses various approaches that can be deployed for optimizing various CNN and QNN models with additionally available tools. The work is performed on the Xilinx Zynq 7020 series Pynq Z2 board, which serves as our FPGA-based low-power IoT device. The MNIST and CIFAR-10 databases are considered for simulation and experimentation. The work shows that the accuracy is 95.5% and 79.22% for the MNIST and CIFAR-10 databases, respectively, for full precision (32-bit), and the execution time is 5.8 ms and 18 ms for the MNIST and CIFAR-10 databases, respectively, for full precision (32-bit).
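The efficiency of BNNs on FPGAs comes from replacing multiply-accumulate with XNOR and popcount over {-1, +1} values. A minimal sketch of that identity follows; it mirrors the standard BNN trick in software and is not tied to the paper's specific hardware design.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} (sign binarization used in BNNs)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dot(a_bits, w_bits):
    """Dot product of {-1, +1} vectors via XNOR + popcount, the identity that
    lets FPGAs replace hardware multipliers with LUT logic."""
    n = len(a_bits)
    a = a_bits > 0            # encode {-1, +1} as bits {0, 1}
    w = w_bits > 0
    matches = np.sum(~(a ^ w))  # XNOR then popcount
    return 2 * int(matches) - n  # rescale back to +/-1 arithmetic

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
w = rng.standard_normal(64)
xb, wb = binarize(x), binarize(w)
# The bitwise path agrees exactly with the ordinary integer dot product:
print(bnn_dot(xb, wb) == int(np.dot(xb.astype(int), wb.astype(int))))
```

On hardware, `a` and `w` pack 64 weights into a single machine word, so one XNOR plus one popcount instruction replaces 64 multiplications, which is the source of the latency and power advantage reported for BNN models.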
APA, Harvard, Vancouver, ISO, and other styles
50

Abubakar, Adamu, Teddy Mantoro, Sardjoeni Moedjiono, Media Anugerah Ayu, Haruna Chiroma, Ahmad Waqas, Shafi’i Muhammad Abdulhamid, Mukhtar Fatihu Hamza, and Abdulsalam Ya'u Gital. "A Support Vector Machine Classification of Computational Capabilities of 3D Map on Mobile Device for Navigation Aid." International Journal of Interactive Mobile Technologies (iJIM) 10, no. 3 (July 26, 2016): 4. http://dx.doi.org/10.3991/ijim.v10i3.5056.

Full text
Abstract:
3D maps for mobile devices provide a more realistic view of an environment and serve as a better navigation aid. Previous research studies show differences in the effect of 3D maps on the acquisition of spatial knowledge. This is attributed to differences in mobile device computational capabilities. Crucial to this is the time it takes for a 3D map dataset to be rendered for a complete navigation task. Different findings suggest different approaches to solving the problem of the time required for both in-core (on the mobile device) and out-of-core (remote) rendering of 3D datasets. Unfortunately, studies on the analytical techniques required to show the impact of the computational resources needed to use a 3D map on a mobile device have been neglected by the research community. This paper uses a Support Vector Machine (SVM) to analytically classify the mobile device computational capabilities required for a 3D map that will be suitable for use as a navigation aid. Fifty different smartphones were categorized on the basis of their Graphical Processing Unit (GPU), display resolution, memory, and size. The result of the proposed classification shows high accuracy.
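The classification setup above, an SVM over four device-capability features, can be sketched with scikit-learn. Everything synthetic here is an assumption: the feature ranges, the labels, and the decision rule generating them are invented to make the example runnable, not taken from the study's fifty phones.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four features the study records per phone:
# GPU score, display resolution (megapixels), RAM (GB), screen size (inches).
# Label: 1 = capable of smooth in-core 3D map rendering, 0 = not.
n = 50
gpu = rng.uniform(0, 10, n)
res = rng.uniform(0.5, 4.0, n)
ram = rng.uniform(0.5, 8.0, n)
size = rng.uniform(3.5, 7.0, n)
X = np.column_stack([gpu, res, ram, size])
y = (gpu + ram > 9).astype(int)   # invented rule producing the toy labels

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

With real benchmark measurements in place of the synthetic columns, `clf.predict` would answer the paper's practical question: whether a given handset should render the 3D map in-core or fall back to remote rendering.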
APA, Harvard, Vancouver, ISO, and other styles