Journal articles on the topic "Neural network accelerator"

Follow this link to see other types of publications on the topic: Neural network accelerator.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the 50 best journal articles for your research on the topic "Neural network accelerator".

Next to each source in the list of references, there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is present in the metadata.

Browse journal articles from a wide variety of scientific fields and compile an accurate bibliography.

1

Eliahu, Adi, Ronny Ronen, Pierre-Emmanuel Gaillardon, and Shahar Kvatinsky. "multiPULPly". ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (April 2021): 1–27. http://dx.doi.org/10.1145/3432815.

Abstract:
Computationally intensive neural network applications often need to run on resource-limited low-power devices. Numerous hardware accelerators have been developed to speed up the performance of neural network applications and reduce power consumption; however, most focus on data centers and full-fledged systems. Acceleration in ultra-low-power systems has been only partially addressed. In this article, we present multiPULPly, an accelerator that integrates memristive technologies within standard low-power CMOS technology, to accelerate multiplication in neural network inference on ultra-low-power systems. This accelerator was designated for PULP, an open-source microcontroller system that uses low-power RISC-V processors. Memristors were integrated into the accelerator to enable power consumption only when the memory is active, to continue the task with no context-restoring overhead, and to enable highly parallel analog multiplication. To reduce the energy consumption, we propose novel dataflows that handle common multiplication scenarios and are tailored for our architecture. The accelerator was tested on FPGA and achieved a peak energy efficiency of 19.5 TOPS/W, outperforming state-of-the-art accelerators by 1.5× to 4.5×.
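
The analog multiplication scheme described above amounts to a vector-matrix product computed by Ohm's and Kirchhoff's laws: weights are stored as conductances, input activations are applied as voltages, and column currents accumulate the products. Below is a minimal NumPy sketch of that mapping; the differential conductance encoding, the conductance range, and the ideal noise-free devices are illustrative assumptions of ours, not details of the multiPULPly design.

```python
import numpy as np

def crossbar_vmm(weights, inputs, g_min=1e-6, g_max=1e-4):
    """Idealized memristive crossbar vector-matrix multiply.

    Signed weights are mapped onto a differential pair of conductances
    (positive and negative columns); inputs are applied as voltages and
    the summed column currents implement multiply-accumulate in place.
    """
    w = np.asarray(weights, dtype=float)          # shape (rows, cols)
    v = np.asarray(inputs, dtype=float)           # shape (rows,)
    scale = np.max(np.abs(w)) or 1.0
    g_pos = g_min + (g_max - g_min) * np.clip(w, 0, None) / scale
    g_neg = g_min + (g_max - g_min) * np.clip(-w, 0, None) / scale
    i_pos = v @ g_pos                             # Kirchhoff current sum per column
    i_neg = v @ g_neg
    return (i_pos - i_neg) * scale / (g_max - g_min)  # back to weight units

# y approximates inputs @ weights
y = crossbar_vmm(np.array([[0.5, -1.0], [2.0, 0.25]]), np.array([1.0, -0.5]))
```
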
2

Cho, Jaechan, Yongchul Jung, Seongjoo Lee, and Yunho Jung. "Reconfigurable Binary Neural Network Accelerator with Adaptive Parallelism Scheme". Electronics 10, no. 3 (January 20, 2021): 230. http://dx.doi.org/10.3390/electronics10030230.

Abstract:
Binary neural networks (BNNs) have attracted significant interest for the implementation of deep neural networks (DNNs) on resource-constrained edge devices, and various BNN accelerator architectures have been proposed to achieve higher efficiency. BNN accelerators can be divided into two categories: streaming and layer accelerators. Although streaming accelerators designed for a specific BNN network topology provide high throughput, they are infeasible for various sensor applications in edge AI because of their complexity and inflexibility. In contrast, layer accelerators with reasonable resources can support various network topologies, but they operate with the same parallelism for all the layers of the BNN, which degrades throughput performance at certain layers. To overcome this problem, we propose a BNN accelerator with adaptive parallelism that offers high throughput performance in all layers. The proposed accelerator analyzes target layer parameters and operates with optimal parallelism using reasonable resources. In addition, this architecture is able to fully compute all types of BNN layers thanks to its reconfigurability, and it can achieve a higher area–speed efficiency than existing accelerators. In performance evaluation using state-of-the-art BNN topologies, the designed BNN accelerator achieved an area–speed efficiency 9.69 times higher than previous FPGA implementations and 24% higher than existing VLSI implementations for BNNs.
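
The basic operation such BNN accelerators parallelize is the XNOR-popcount dot product: with weights and activations restricted to ±1 and packed into machine words, a multiply-accumulate reduces to a bitwise XNOR followed by a population count. A small illustrative Python sketch follows; the bit packing and vector length here are arbitrary and not taken from the paper.

```python
def binarize(values):
    """Map real values to +1/-1 and pack them into an integer bitmask."""
    bits = 0
    for i, v in enumerate(values):
        if v >= 0:                      # +1 encoded as bit 1, -1 as bit 0
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits, w_bits, n):
    """Dot product of two length-n ±1 vectors packed as bitmasks."""
    matches = ~(a_bits ^ w_bits) & ((1 << n) - 1)   # XNOR, masked to n bits
    pop = bin(matches).count("1")
    return 2 * pop - n                  # matches count +1, mismatches count -1

acts = [0.3, -1.2, 0.7, -0.1]
wts = [1.0, -0.5, -0.9, 0.2]
print(xnor_popcount_dot(binarize(acts), binarize(wts), len(acts)))   # prints 0
```
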
3

Noskova, E. S., I. E. Zakharov, Y. N. Shkandybin, and S. G. Rykovanov. "Towards energy-efficient neural network calculations". Computer Optics 46, no. 1 (February 2022): 160–66. http://dx.doi.org/10.18287/2412-6179-co-914.

Abstract:
Nowadays, the problem of creating high-performance and energy-efficient hardware for Artificial Intelligence tasks is very acute. The most popular solution to this problem is the use of Deep Learning Accelerators, such as GPUs and Tensor Processing Units to run neural networks. Recently, NVIDIA has announced the NVDLA project, which allows one to design neural network accelerators based on an open-source code. This work describes a full cycle of creating a prototype NVDLA accelerator, as well as testing the resulting solution by running the resnet-50 neural network on it. Finally, an assessment of the performance and power efficiency of the prototype NVDLA accelerator when compared to the GPU and CPU is provided, the results of which show the superiority of NVDLA in many characteristics.
4

Hong, JiUn, Saad Arslan, TaeGeon Lee, and HyungWon Kim. "Design of Power-Efficient Training Accelerator for Convolution Neural Networks". Electronics 10, no. 7 (March 26, 2021): 787. http://dx.doi.org/10.3390/electronics10070787.

Abstract:
To realize deep learning techniques, a type of deep neural network (DNN) called a convolutional neural network (CNN) is among the most widely used models aimed at image recognition applications. However, there is growing demand for light-weight and low-power neural network accelerators, not only for inference but also for the training process. In this paper, we propose a training accelerator that provides low power and compact chip size targeted for mobile and edge computing applications. It achieves real-time processing of both inference and training using concurrent floating-point data paths. The proposed accelerator can be externally controlled and employs resource sharing and an integrated convolution-pooling block to achieve low area and low energy consumption. We implemented the proposed training accelerator in an FPGA (Field Programmable Gate Array) and evaluated its training performance using an MNIST CNN example in comparison with a PC with GPU (Graphics Processing Unit). While both methods achieved a similar training accuracy of 95.1%, the proposed accelerator, when implemented in a silicon chip, reduced the energy consumption by 480 times compared to the counterpart. Additionally, when implemented on an FPGA, an energy reduction of over 4.5 times was achieved compared to the existing FPGA training accelerator for the MNIST dataset. Therefore, the proposed accelerator is more suitable for deployment in mobile/edge nodes compared to the existing software and hardware accelerators.
5

Ferianc, Martin, Hongxiang Fan, Divyansh Manocha, Hongyu Zhou, Shuanglong Liu, Xinyu Niu, and Wayne Luk. "Improving Performance Estimation for Design Space Exploration for Convolutional Neural Network Accelerators". Electronics 10, no. 4 (February 23, 2021): 520. http://dx.doi.org/10.3390/electronics10040520.

Abstract:
Contemporary advances in neural networks (NNs) have demonstrated their potential in different applications such as in image classification, object detection or natural language processing. In particular, reconfigurable accelerators have been widely used for the acceleration of NNs due to their reconfigurability and efficiency in specific application instances. To determine the configuration of the accelerator, it is necessary to conduct design space exploration to optimize the performance. However, the process of design space exploration is time consuming because of the slow performance evaluation for different configurations. Therefore, there is a demand for an accurate and fast performance prediction method to speed up design space exploration. This work introduces a novel method for fast and accurate estimation of different metrics that are of importance when performing design space exploration. The method is based on a Gaussian process regression model parametrised by the features of the accelerator and the target NN to be accelerated. We evaluate the proposed method together with other popular machine learning based methods in estimating the latency and energy consumption of our implemented accelerator on two different hardware platforms targeting convolutional neural networks. We demonstrate improvements in estimation accuracy, without the need for significant implementation effort or tuning.
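
A Gaussian process regressor of the kind described, mapping accelerator and network features to a latency estimate with an uncertainty, can be prototyped in a few lines of scikit-learn. The feature set and measurements below are placeholders rather than the authors' benchmark data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Placeholder features: [PE array size, on-chip buffer KB, layer MACs (millions), bitwidth]
X_train = np.array([[16, 256, 110, 16],
                    [32, 512, 110, 16],
                    [32, 512, 230, 8],
                    [64, 1024, 230, 8]], dtype=float)
y_train = np.array([9.1, 5.2, 7.8, 3.9])     # measured latency in ms (made-up numbers)

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X_train.shape[1]))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Predict latency and its uncertainty for an unseen accelerator configuration
mean, std = gp.predict(np.array([[64, 1024, 110, 16]], dtype=float), return_std=True)
print(f"predicted latency {mean[0]:.2f} ms +/- {std[0]:.2f}")
```
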
6

Sunny, Febin P., Asif Mirza, Mahdi Nikdast, and Sudeep Pasricha. "ROBIN: A Robust Optical Binary Neural Network Accelerator". ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–24. http://dx.doi.org/10.1145/3476988.

Abstract:
Domain-specific neural network accelerators have garnered attention because of their improved energy efficiency and inference performance compared to CPUs and GPUs. Such accelerators are thus well suited for resource-constrained embedded systems. However, mapping sophisticated neural network models on these accelerators still entails significant energy and memory consumption, along with high inference time overhead. Binarized neural networks (BNNs), which utilize single-bit weights, represent an efficient way to implement and deploy neural network models on accelerators. In this paper, we present a novel optical-domain BNN accelerator, named ROBIN, which intelligently integrates heterogeneous microring resonator optical devices with complementary capabilities to efficiently implement the key functionalities in BNNs. We perform detailed fabrication-process variation analyses at the optical device level, explore efficient corrective tuning for these devices, and integrate circuit-level optimization to counter thermal variations. As a result, our proposed ROBIN architecture possesses the desirable traits of being robust, energy-efficient, low latency, and high throughput, when executing BNN models. Our analysis shows that ROBIN can outperform the best-known optical BNN accelerators and many electronic accelerators. Specifically, our energy-efficient ROBIN design exhibits energy-per-bit values that are ∼4× lower than electronic BNN accelerators and ∼933× lower than a recently proposed photonic BNN accelerator, while a performance-efficient ROBIN design shows ∼3× and ∼25× better performance than electronic and photonic BNN accelerators, respectively.
7

Anmin, Kong, and Zhao Bin. "A Parallel Loading Based Accelerator for Convolution Neural Network". International Journal of Machine Learning and Computing 10, no. 5 (October 5, 2020): 669–74. http://dx.doi.org/10.18178/ijmlc.2020.10.5.989.

8

Xia, Chengpeng, Yawen Chen, Haibo Zhang, Hao Zhang, Fei Dai, and Jigang Wu. "Efficient neural network accelerators with optical computing and communication". Computer Science and Information Systems, no. 00 (2022): 66. http://dx.doi.org/10.2298/csis220131066x.

Abstract:
Conventional electronic Artificial Neural Network (ANN) accelerators focus on architecture design and numerical computation optimization to improve training efficiency. However, these approaches have recently encountered bottlenecks in terms of energy efficiency and computing performance, which has led to increased interest in photonic accelerators. Photonic architectures with low energy consumption, high transmission speed and high bandwidth are considered an important option for next-generation computing architectures. In this paper, to provide a better understanding of the optical technology used in ANN acceleration, we present a comprehensive review of efficient photonic computing and communication in ANN accelerators. The related photonic devices are investigated in terms of their application to ANN acceleration, and a classification of existing solutions is proposed, categorized into optical computing acceleration and optical communication acceleration according to photonic effects and photonic architectures. Moreover, we discuss the challenges for these photonic neural network acceleration approaches to highlight the most promising future research opportunities in this field.
9

Tang, Wenkai, and Peiyong Zhang. "GPGCN: A General-Purpose Graph Convolution Neural Network Accelerator Based on RISC-V ISA Extension". Electronics 11, no. 22 (November 21, 2022): 3833. http://dx.doi.org/10.3390/electronics11223833.

Abstract:
In the past two years, various graph convolution neural network (GCN) accelerators have emerged, each with its own characteristics, but their common disadvantage is that the hardware architecture is not programmable and is optimized for a specific network and dataset. They may not support acceleration for different GCNs and may not achieve optimal hardware resource utilization for datasets of different sizes. Therefore, given the above shortcomings, and following the development trend of traditional neural network accelerators, this paper proposes and implements GPGCN: a general-purpose GCN accelerator architecture based on a RISC-V instruction set extension, providing the software programming freedom to support acceleration for various GCNs, and achieving the best acceleration efficiency for different GCNs with different datasets. Compared with a traditional CPU, and a traditional CPU with vector extensions, GPGCN achieves speedups of over 1001× and 267×, respectively, for GCN with the Cora dataset. Compared with dedicated accelerators, GPGCN has software programmability and supports the acceleration of more GCNs.
10

An, Fubang, Lingli Wang, and Xuegong Zhou. "A High Performance Reconfigurable Hardware Architecture for Lightweight Convolutional Neural Network". Electronics 12, no. 13 (June 27, 2023): 2847. http://dx.doi.org/10.3390/electronics12132847.

Abstract:
Since the lightweight convolutional neural network EfficientNet was proposed by Google in 2019, the series of models have quickly become very popular due to their superior performance with a small number of parameters. However, the existing convolutional neural network hardware accelerators for EfficientNet still have much room to improve the performance of the depthwise convolution, squeeze-and-excitation module and nonlinear activation functions. In this paper, we first design a reconfigurable register array and computational kernel to accelerate the depthwise convolution. Next, we propose a vector unit to implement the nonlinear activation functions and the scale operation. An exchangeable-sequence dual-computational kernel architecture is proposed to improve the performance and the utilization. In addition, the memory architectures are designed to complete the hardware accelerator for the above computing architecture. Finally, in order to evaluate the performance of the hardware accelerator, the accelerator is implemented based on Xilinx XCVU37P. The results show that the proposed accelerator can work at the main system clock frequency of 300 MHz with the DSP kernel at 600 MHz. The performance of EfficientNet-B3 in our architecture can reach 69.50 FPS and 255.22 GOPS. Compared with the latest EfficientNet-B3 accelerator, which uses the same FPGA development board, the accelerator proposed in this paper can achieve a 1.28-fold improvement of single-core performance and 1.38-fold improvement of performance of each DSP.
11

Biookaghazadeh, Saman, Pravin Kumar Ravi, and Ming Zhao. "Toward Multi-FPGA Acceleration of the Neural Networks". ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (April 2021): 1–23. http://dx.doi.org/10.1145/3432816.

Abstract:
High-throughput and low-latency Convolutional Neural Network (CNN) inference is increasingly important for many cloud- and edge-computing applications. FPGA-based acceleration of CNN inference has demonstrated various benefits compared to other high-performance devices such as GPGPUs. Current FPGA CNN-acceleration solutions are based on a single FPGA design, which are limited by the available resources on an FPGA. In addition, they can only accelerate conventional 2D neural networks. To address these limitations, we present a generic multi-FPGA solution, written in OpenCL, which can accelerate more complex CNNs (e.g., C3D CNN) and achieve a near linear speedup with respect to the available single-FPGA solutions. The design is built upon the Intel Deep Learning Accelerator architecture, with three extensions. First, it includes updates for better area efficiency (up to 25%) and higher performance (up to 24%). Second, it supports 3D convolutions for more challenging applications such as video learning. Third, it supports multi-FPGA communication for higher inference throughput. The results show that utilizing multiple FPGAs can linearly increase the overall bandwidth while maintaining the same end-to-end latency. In addition, the design can outperform other FPGA 2D accelerators by up to 8.4 times and 3D accelerators by up to 1.7 times.
12

Ge, Fen, Ning Wu, Hao Xiao, Yuanyuan Zhang, and Fang Zhou. "Compact Convolutional Neural Network Accelerator for IoT Endpoint SoC". Electronics 8, no. 5 (May 5, 2019): 497. http://dx.doi.org/10.3390/electronics8050497.

Abstract:
As a classical artificial intelligence algorithm, the convolutional neural network (CNN) algorithm plays an important role in image recognition and classification and is gradually being applied in Internet of Things (IoT) systems. A compact CNN accelerator for the IoT endpoint System-on-Chip (SoC) is proposed in this paper to meet the needs of CNN computations. Based on analysis of the CNN structure, basic functional modules of CNN such as convolution circuit and pooling circuit with a low data bandwidth and a smaller area are designed, and an accelerator is constructed in the form of four acceleration chains. After the acceleration unit design is completed, the Cortex-M3 is used to construct a verification SoC and the designed verification platform is implemented on the FPGA to evaluate the resource consumption and performance of the CNN accelerator. The CNN accelerator achieved a throughput of 6.54 GOPS (giga operations per second) by consuming 4901 LUTs without using any hardware multipliers. The comparison shows that the compact accelerator proposed in this paper makes the CNN computational power of the SoC based on the Cortex-M3 core two times higher than that of a quad-core Cortex-A7 SoC and 67% of the computational power of an eight-core Cortex-A53 SoC.
13

Clements, Joseph, and Yingjie Lao. "DeepHardMark: Towards Watermarking Neural Network Hardware". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4450–58. http://dx.doi.org/10.1609/aaai.v36i4.20367.

Abstract:
This paper presents a framework for embedding watermarks into DNN hardware accelerators. Unlike previous works that have looked at protecting the algorithmic intellectual properties of deep learning systems, this work proposes a methodology for defending deep learning hardware. Our methodology embeds modifications into the hardware accelerator's functional blocks that can be revealed with the rightful owner's key DNN and corresponding key sample, verifying the legitimate owner. We propose an Lp-box ADMM-based algorithm to co-optimize the watermark's hardware overhead and impact on the design's algorithmic functionality. We evaluate the performance of the hardware watermarking scheme on popular image classifier models using various accelerator designs. Our results demonstrate that the proposed methodology effectively embeds watermarks while preserving the original functionality of the hardware architecture. Specifically, we can successfully embed watermarks into the deep learning hardware and reliably execute a ResNet ImageNet classifier with an accuracy degradation of only 0.009%.
14

Xia, Chengpeng, Yawen Chen, Haibo Zhang, and Jigang Wu. "STADIA: Photonic Stochastic Gradient Descent for Neural Network Accelerators". ACM Transactions on Embedded Computing Systems 22, no. 5s (September 9, 2023): 1–23. http://dx.doi.org/10.1145/3607920.

Abstract:
Deep Neural Networks (DNNs) have demonstrated great success in many fields such as image recognition and text analysis. However, the ever-increasing sizes of both DNN models and training datasets make deep learning extremely computation- and memory-intensive. Recently, photonic computing has emerged as a promising technology for accelerating DNNs. While the design of photonic accelerators for DNN inference and forward propagation of DNN training has been widely investigated, the architectural acceleration for equally important backpropagation of DNN training has not been well studied. In this paper, we propose a novel silicon photonic-based backpropagation accelerator for high performance DNN training. Specifically, a general-purpose photonic gradient descent unit named STADIA is designed to implement the multiplication, accumulation, and subtraction operations required for computing gradients using mature optical devices including Mach-Zehnder Interferometer (MZI) and Microring Resonator (MRR), which can significantly reduce the training latency and improve the energy efficiency of backpropagation. To demonstrate efficient parallel computing, we propose a STADIA-based backpropagation acceleration architecture and design a dataflow by using wavelength-division multiplexing (WDM). We analyze the precision of STADIA by quantifying the precision limitations imposed by losses and noises. Furthermore, we evaluate STADIA with different element sizes by analyzing the power, area and time delay for photonic accelerators based on DNN models such as AlexNet, VGG19 and ResNet. Simulation results show that the proposed architecture STADIA can achieve significant improvement by 9.7× in time efficiency and 147.2× in energy efficiency, compared with the most advanced optical-memristor based backpropagation accelerator.
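
The multiply, accumulate, and subtract operations that the paper maps onto MZIs and MRRs are the standard backpropagation steps for a dense layer. For reference, here they are written out in plain NumPy; this is a generic sketch of the computation being accelerated, not the photonic dataflow itself.

```python
import numpy as np

def backprop_layer(x, w, grad_out, lr=0.01):
    """One dense layer's backward pass: the multiply / accumulate / subtract
    pattern a gradient-descent accelerator has to implement.

    x:        input activations, shape (n_in,)
    w:        weight matrix, shape (n_out, n_in)
    grad_out: gradient of the loss w.r.t. this layer's output, shape (n_out,)
    """
    grad_w = np.outer(grad_out, x)       # multiply + accumulate: weight gradient
    grad_x = w.T @ grad_out              # gradient propagated to the previous layer
    w_new = w - lr * grad_w              # subtract: the SGD weight update
    return w_new, grad_x
```
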
15

Wei, Rongshan, Chenjia Li, Chuandong Chen, Guangyu Sun, and Minghua He. "Memory Access Optimization of a Neural Network Accelerator Based on Memory Controller". Electronics 10, no. 4 (February 10, 2021): 438. http://dx.doi.org/10.3390/electronics10040438.

Abstract:
Special accelerator architecture has achieved great success in processor architecture, and it is trending in computer architecture development. However, as the memory access pattern of an accelerator is relatively complicated, the memory access performance is relatively poor, limiting the overall performance improvement of hardware accelerators. Moreover, memory controllers for hardware accelerators have been scarcely researched. We consider that a special accelerator memory controller is essential for improving the memory access performance. To this end, we propose a dynamic random access memory (DRAM) memory controller called NNAMC for neural network accelerators, which monitors the memory access stream of an accelerator and transfers it to the optimal address mapping scheme bank based on the memory access characteristics. NNAMC includes a stream access prediction unit (SAPU) that analyzes the type of data stream accessed by the accelerator via hardware, and designs the address mapping for different banks using a bank partitioning model (BPM). The image mapping method and hardware architecture were analyzed in a practical neural network accelerator. In the experiment, NNAMC achieved significantly lower access latency of the hardware accelerator than the competing address mapping schemes, increased the row buffer hit ratio by 13.68% on average (up to 26.17%), reduced the system access latency by 26.3% on average (up to 37.68%), and lowered the hardware cost. In addition, we also confirmed that NNAMC efficiently adapted to different network parameters.
16

Hu, Jian, Xianlong Zhang, and Xiaohua Shi. "Simulating Neural Network Processors". Wireless Communications and Mobile Computing 2022 (February 23, 2022): 1–12. http://dx.doi.org/10.1155/2022/7500195.

Abstract:
Deep learning has achieved competitive results compared with human beings in many fields. Traditionally, deep learning networks are executed on CPUs and GPUs. In recent years, more and more neural network accelerators have been introduced in both academia and industry to improve the performance and energy efficiency for deep learning networks. In this paper, we introduce a flexible and configurable functional NN accelerator simulator, which can be configured to simulate the microarchitectures of different NN accelerators. The extensible and configurable simulator is helpful for system-level microarchitecture exploration, as well as for the development of operator optimization algorithms. The simulator is a functional simulator that simulates the latencies of calculation and memory access and the concurrent process between modules, and it gives the number of program execution cycles after the simulation is completed. We also integrated the simulator into the TVM compilation stack as an optional backend. Users can use TVM to write operators and execute them on the simulator.
17

Chen, Weijian, Zhi Qi, Zahid Akhtar, and Kamran Siddique. "Resistive-RAM-Based In-Memory Computing for Neural Network: A Review". Electronics 11, no. 22 (November 9, 2022): 3667. http://dx.doi.org/10.3390/electronics11223667.

Abstract:
Processing-in-memory (PIM) is a promising architecture to design various types of neural network accelerators as it ensures the efficiency of computation together with Resistive Random Access Memory (ReRAM). ReRAM has now become a promising solution to enhance computing efficiency due to its crossbar structure. In this paper, a ReRAM-based PIM neural network accelerator is addressed, and different kinds of methods and designs of various schemes are discussed. Various models and architectures implemented for a neural network accelerator are determined for research trends. Further, the limitations or challenges of ReRAM in a neural network are also addressed in this review.
18

Lim, Se-Min, and Sang-Woo Jun. "MobileNets Can Be Lossily Compressed: Neural Network Compression for Embedded Accelerators". Electronics 11, no. 6 (March 9, 2022): 858. http://dx.doi.org/10.3390/electronics11060858.

Abstract:
Although neural network quantization is an imperative technology for the computation and memory efficiency of embedded neural network accelerators, simple post-training quantization incurs unacceptable levels of accuracy degradation on some important models targeting embedded systems, such as MobileNets. While explicit quantization-aware training or re-training after quantization can often reclaim lost accuracy, this is not always possible or convenient. We present an alternative approach to compressing such difficult neural networks, using a novel variant of the ZFP lossy floating-point compression algorithm to compress both model weights and inter-layer activations and demonstrate that it can be efficiently implemented on an embedded FPGA platform. Our ZFP variant, which we call ZFPe, is designed for efficient implementation on embedded accelerators, such as FPGAs, requiring a fraction of chip resources per bandwidth compared to state-of-the-art lossy compression accelerators. ZFPe-compressing the MobileNet V2 model with an 8-bit budget per weight and activation results in significantly higher accuracy compared to 8-bit integer post-training quantization and shows no loss of accuracy, compared to an uncompressed model when given a 12-bit budget per floating-point value. To demonstrate the benefits of our approach, we implement an embedded neural network accelerator on a realistic embedded acceleration platform equipped with the low-power Lattice ECP5-85F FPGA and a 32 MB SDRAM chip. Each ZFPe module consumes less than 6% of LUTs while compressing or decompressing one value per cycle, requiring a fraction of the resources compared to state-of-the-art compression accelerators while completely removing the memory bottleneck of our accelerator.
19

Afifi, Salma, Febin Sunny, Amin Shafiee, Mahdi Nikdast, and Sudeep Pasricha. "GHOST: A Graph Neural Network Accelerator using Silicon Photonics". ACM Transactions on Embedded Computing Systems 22, no. 5s (September 9, 2023): 1–25. http://dx.doi.org/10.1145/3609097.

Abstract:
Graph neural networks (GNNs) have emerged as a powerful approach for modelling and learning from graph-structured data. Multiple fields have since benefitted enormously from the capabilities of GNNs, such as recommendation systems, social network analysis, drug discovery, and robotics. However, accelerating and efficiently processing GNNs require a unique approach that goes beyond conventional artificial neural network accelerators, due to the substantial computational and memory requirements of GNNs. The slowdown of scaling in CMOS platforms also motivates a search for alternative implementation substrates. In this paper, we present GHOST, the first silicon-photonic hardware accelerator for GNNs. GHOST efficiently alleviates the costs associated with both vertex-centric and edge-centric operations. It implements separately the three main stages involved in running GNNs in the optical domain, allowing it to be used for the inference of various widely used GNN models and architectures, such as graph convolution networks and graph attention networks. Our simulation studies indicate that GHOST exhibits at least 10.2× better throughput and 3.8× better energy efficiency when compared to GPU, TPU, CPU and multiple state-of-the-art GNN hardware accelerators.
20

Li, Yihang. "Sparse-Aware Deep Learning Accelerator". Highlights in Science, Engineering and Technology 39 (April 1, 2023): 305–10. http://dx.doi.org/10.54097/hset.v39i.6544.

Abstract:
Given the difficulty of implementing convolutional neural network computation in hardware, most previous convolutional neural network accelerator designs focused on the bottlenecks of computational performance and bandwidth, ignoring the importance of sparsity for accelerator design. In recent years, a few convolutional neural network accelerators have been able to take advantage of sparsity, but they usually struggle to balance computational flexibility, parallel efficiency and resource overhead. Because the application of convolutional neural networks (CNNs) on embedded devices is limited by real-time constraints, and CNN convolution calculations exhibit a large degree of sparsity, this paper summarizes sparsification methods at the algorithm level and at the FPGA level. The different sparsification methods and the research and analysis of different application layers are introduced, and the advantages and development trends of sparsification are analyzed and summarized.
21

Xie, Xiaoru, Mingyu Zhu, Siyuan Lu, and Zhongfeng Wang. "Efficient Layer-Wise N:M Sparse CNN Accelerator with Flexible SPEC: Sparse Processing Element Clusters". Micromachines 14, no. 3 (February 24, 2023): 528. http://dx.doi.org/10.3390/mi14030528.

Abstract:
Recently, the layer-wise N:M fine-grained sparse neural network algorithm (i.e., every group of M weights contains N non-zero values) has attracted tremendous attention, as it can effectively reduce the computational complexity with negligible accuracy loss. However, the speed-up potential of this algorithm will not be fully exploited if the right hardware support is lacking. In this work, we design an efficient accelerator for the N:M sparse convolutional neural networks (CNNs) with layer-wise sparse patterns. First, we analyze the performances of different processing element (PE) structures and extensions to construct the flexible PE architecture. Second, the variable sparse convolutional dimensions and sparse ratios are involved in the hardware design. With a sparse PE cluster (SPEC) design, the hardware can efficiently accelerate CNNs with the layer-wise N:M pattern. Finally, we integrate the proposed SPEC into the CNN accelerator with flexible network-on-chip and specially designed dataflow. We implement hardware accelerators on Xilinx ZCU102 FPGA and Xilinx VCU118 FPGA and evaluate them with classical CNNs such as AlexNet, VGG-16, and ResNet-50. Compared with existing accelerators designed for structured and unstructured pruned networks, our design achieves the best performance in terms of power efficiency.
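
Layer-wise N:M sparsity means that within every group of M consecutive weights only the N largest-magnitude values are kept. A NumPy sketch of the pruning mask is shown below for orientation; the accelerator in the paper consumes such patterns in hardware rather than computing them this way.

```python
import numpy as np

def nm_prune(weights, n=2, m=4):
    """Zero out all but the n largest-magnitude weights in every group of m."""
    w = np.asarray(weights, dtype=float)
    flat = w.reshape(-1, m)                          # assumes size divisible by m
    keep = np.argsort(-np.abs(flat), axis=1)[:, :n]  # indices of the n largest per group
    mask = np.zeros_like(flat)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    return (flat * mask).reshape(w.shape)

w = np.arange(1, 9, dtype=float) * np.array([1, -1, 1, -1, 1, -1, 1, -1])
print(nm_prune(w, n=2, m=4))   # only two non-zeros survive in each group of four
```
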
22

Yang, Zhi. "Dynamic Logo Design System of Network Media Art Based on Convolutional Neural Network". Mobile Information Systems 2022 (May 31, 2022): 1–10. http://dx.doi.org/10.1155/2022/3247229.

Abstract:
Nowadays, we are in an era of rapid development of Internet technology and unlimited expansion of information dissemination. While the application of new media and digital multimedia has become more popular, it has also brought earth-shaking changes to our lives. To solve the problem that traditional static visual images can hardly meet people's needs, a network media art dynamic logo design system based on a convolutional neural network is proposed. Firstly, the software and hardware platform related to accelerator development is introduced, the high-level synthesis design and calculation IP core are chosen as the FPGA hardware accelerator, and the design objectives and requirements of the accelerator system are analyzed. The overall architecture of the accelerator system is designed. 76% of designers believe that the dynamic logo has promoted the corporate image. Then, the function and architecture of the IP core are designed based on high-level synthesis, the code structure is standardized, the function is divided, and the operation acceleration is further optimized by using the instruction set of HLS. Finally, the design is integrated by Vivado HLS and Vivado IDE software. The experiment shows that the accelerator system has low power consumption and high resource utilization.
23

Hosseini, Morteza, and Tinoosh Mohsenin. "Binary Precision Neural Network Manycore Accelerator". ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (April 2021): 1–27. http://dx.doi.org/10.1145/3423136.

Abstract:
This article presents a low-power, programmable, domain-specific manycore accelerator, Binarized neural Network Manycore Accelerator (BiNMAC), which adopts and efficiently executes binary precision weight/activation neural network models. Such networks have compact models in which weights are constrained to only 1 bit and can be packed several in one memory entry that minimizes memory footprint to its finest. Packing weights also facilitates executing single instruction, multiple data with simple circuitry that allows maximizing performance and efficiency. The proposed BiNMAC has light-weight cores that support domain-specific instructions, and a router-based memory access architecture that helps with efficient implementation of layers in binary precision weight/activation neural networks of proper size. With only 3.73% and 1.98% area and average power overhead, respectively, novel instructions such as Combined Population-Count-XNOR, Patch-Select, and Bit-based Accumulation are added to the instruction set architecture of the BiNMAC, each of which replaces execution cycles of frequently used functions with 1 clock cycle that otherwise would have taken 54, 4, and 3 clock cycles, respectively. Additionally, customized logic is added to every core to transpose 16×16-bit blocks of memory on a bit-level basis, that expedites reshaping intermediate data to be well-aligned for bitwise operations. A 64-cluster architecture of the BiNMAC is fully placed and routed in 65-nm TSMC CMOS technology, where a single cluster occupies an area of 0.53 mm² with an average power of 232 mW at 1-GHz clock frequency and 1.1 V. The 64-cluster architecture takes 36.5 mm² area and, if fully exploited, consumes a total power of 16.4 W and can perform 1,360 Giga Operations Per Second (GOPS) while providing full programmability. To demonstrate its scalability, four binarized case studies including ResNet-20 and LeNet-5 for high-performance image classification, as well as a ConvNet and a multilayer perceptron for low-power physiological applications were implemented on BiNMAC. The implementation results indicate that the population-count instruction alone can expedite the performance by approximately 5×. When other new instructions are added to a RISC machine with existing population-count instruction, the performance is increased by 58% on average. To compare the performance of the BiNMAC with other commercial-off-the-shelf platforms, the case studies with their double-precision floating-point models are also implemented on the NVIDIA Jetson TX2 SoC (CPU+GPU). The results indicate that, within a margin of ∼2.1%–9.5% accuracy loss, BiNMAC on average outperforms the TX2 GPU by approximately 1.9× (or 7.5× with fabrication technology scaled) in energy consumption for image classification applications. On low power settings and within a margin of ∼3.7%–5.5% accuracy loss compared to ARM Cortex-A57 CPU implementation, BiNMAC is roughly ∼9.7×–17.2× (or 38.8×–68.8× with fabrication technology scaled) more energy efficient for physiological applications while meeting the application deadline.
24

Park, Sang-Soo, and Ki-Seok Chung. "CENNA: Cost-Effective Neural Network Accelerator". Electronics 9, no. 1 (January 10, 2020): 134. http://dx.doi.org/10.3390/electronics9010134.

Abstract:
Convolutional neural networks (CNNs) are widely adopted in various applications. State-of-the-art CNN models deliver excellent classification performance, but they require a large amount of computation and data exchange because they typically employ many processing layers. Among these processing layers, convolution layers, which carry out many multiplications and additions, account for a major portion of computation and memory access. Therefore, reducing the amount of computation and memory access is the key for high-performance CNNs. In this study, we propose a cost-effective neural network accelerator, named CENNA, whose hardware cost is reduced by employing a cost-centric matrix multiplication that employs both Strassen’s multiplication and a naïve multiplication. Furthermore, the convolution method using the proposed matrix multiplication can minimize data movement by reusing both the feature map and the convolution kernel without any additional control logic. In terms of throughput, power consumption, and silicon area, the efficiency of CENNA is up to 88 times higher than that of conventional designs for the CNN inference.
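
For reference, Strassen's algorithm, one of the two multiplication schemes CENNA mixes, trades one recursive block multiplication for extra additions. A compact NumPy sketch for square power-of-two matrices follows; the cutoff value and the software formulation here are illustrative and unrelated to the CENNA hardware.

```python
import numpy as np

def strassen(a, b, cutoff=64):
    """Strassen matrix multiply for square power-of-two matrices."""
    n = a.shape[0]
    if n <= cutoff:                      # fall back to naive multiply on small blocks
        return a @ b
    h = n // 2
    a11, a12, a21, a22 = a[:h, :h], a[:h, h:], a[h:, :h], a[h:, h:]
    b11, b12, b21, b22 = b[:h, :h], b[:h, h:], b[h:, :h], b[h:, h:]
    m1 = strassen(a11 + a22, b11 + b22, cutoff)   # 7 multiplications instead of 8
    m2 = strassen(a21 + a22, b11, cutoff)
    m3 = strassen(a11, b12 - b22, cutoff)
    m4 = strassen(a22, b21 - b11, cutoff)
    m5 = strassen(a11 + a12, b22, cutoff)
    m6 = strassen(a21 - a11, b11 + b12, cutoff)
    m7 = strassen(a12 - a22, b21 + b22, cutoff)
    c = np.empty_like(a)
    c[:h, :h] = m1 + m4 - m5 + m7
    c[:h, h:] = m3 + m5
    c[h:, :h] = m2 + m4
    c[h:, h:] = m1 - m2 + m3 + m6
    return c

x, y = np.random.rand(128, 128), np.random.rand(128, 128)
assert np.allclose(strassen(x, y), x @ y)
```
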
25

Kim, Dongyoung, Junwhan Ahn, and Sungjoo Yoo. "ZeNA: Zero-Aware Neural Network Accelerator". IEEE Design & Test 35, no. 1 (February 2018): 39–46. http://dx.doi.org/10.1109/mdat.2017.2741463.

26

Chen, Tianshi, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. "A High-Throughput Neural Network Accelerator". IEEE Micro 35, no. 3 (May 2015): 24–32. http://dx.doi.org/10.1109/mm.2015.41.

27

To, Chun-Hao, Eduardo Rozo, Elisabeth Krause, Hao-Yi Wu, Risa H. Wechsler, and Andrés N. Salcedo. "LINNA: Likelihood Inference Neural Network Accelerator". Journal of Cosmology and Astroparticle Physics 2023, no. 01 (January 1, 2023): 016. http://dx.doi.org/10.1088/1475-7516/2023/01/016.

Abstract:
Bayesian posterior inference of modern multi-probe cosmological analyses incurs massive computational costs. For instance, depending on the combinations of probes, a single posterior inference for the Dark Energy Survey (DES) data had a wall-clock time that ranged from 1 to 21 days using a state-of-the-art computing cluster with 100 cores. These computational costs have severe environmental impacts and the long wall-clock time slows scientific productivity. To address these difficulties, we introduce LINNA: the Likelihood Inference Neural Network Accelerator. Relative to the baseline DES analyses, LINNA reduces the computational cost associated with posterior inference by a factor of 8–50. If applied to the first-year cosmological analysis of Rubin Observatory's Legacy Survey of Space and Time (LSST Y1), we conservatively estimate that LINNA will save more than U.S. $300,000 on energy costs, while simultaneously reducing CO2 emission by 2,400 tons. To accomplish these reductions, LINNA automatically builds training data sets, creates neural network emulators, and produces a Markov chain that samples the posterior. We explicitly verify that LINNA accurately reproduces the first-year DES (DES Y1) cosmological constraints derived from a variety of different data vectors with our default code settings, without needing to retune the algorithm every time. Further, we find that LINNA is sufficient for enabling accurate and efficient sampling for LSST Y10 multi-probe analyses. We make LINNA publicly available at https://github.com/chto/linna, to enable others to perform fast and accurate posterior inference in contemporary cosmological analyses.
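
The core idea of replacing an expensive likelihood with a neural-network emulator and then sampling the emulated posterior can be illustrated on a toy problem. Everything below (the stand-in likelihood, prior box, network size, and Metropolis-Hastings sampler) is a hypothetical sketch, not the LINNA pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_loglike(theta):            # stand-in for a slow theory code
    return -0.5 * np.sum((theta - 0.3) ** 2) / 0.05 ** 2

# 1. Build a training set over the prior box and fit an emulator
train_theta = rng.uniform(-1.0, 1.0, size=(2000, 2))
train_logl = np.array([expensive_loglike(t) for t in train_theta])
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(train_theta, train_logl)

# 2. Metropolis-Hastings on the cheap emulated log-likelihood
theta, logl = np.zeros(2), emulator.predict(np.zeros((1, 2)))[0]
chain = []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.05, size=2)
    prop_logl = emulator.predict(prop.reshape(1, -1))[0]
    if np.log(rng.uniform()) < prop_logl - logl:
        theta, logl = prop, prop_logl
    chain.append(theta.copy())

print(np.mean(chain, axis=0))            # should sit near the true value 0.3
```
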
28

Liang, Yong, Junwen Tan, Zhisong Xie, Zetao Chen, Daoqian Lin, and Zhenhao Yang. "Research on Convolutional Neural Network Inference Acceleration and Performance Optimization for Edge Intelligence". Sensors 24, no. 1 (December 31, 2023): 240. http://dx.doi.org/10.3390/s24010240.

Abstract:
In recent years, edge intelligence (EI) has emerged, combining edge computing with AI, and specifically deep learning, to run AI algorithms directly on edge devices. In practical applications, EI faces challenges related to computational power, power consumption, size, and cost, with the primary challenge being the trade-off between computational power and power consumption. This has rendered traditional computing platforms unsustainable, making heterogeneous parallel computing platforms a crucial pathway for implementing EI. In our research, we leveraged the Xilinx Zynq 7000 heterogeneous computing platform, employed high-level synthesis (HLS) for design, and implemented two different accelerators for LeNet-5 using loop unrolling and pipelining optimization techniques. The experimental results show that when running at a clock speed of 100 MHz, the PIPELINE accelerator, compared to the UNROLL accelerator, experiences an 8.09% increase in power consumption but speeds up by 14.972 times, making the PIPELINE accelerator superior in performance. Compared to the CPU, the PIPELINE accelerator reduces power consumption by 91.37% and speeds up by 70.387 times, while compared to the GPU, it reduces power consumption by 93.35%. This study provides two different optimization schemes for edge intelligence applications through design and experimentation and demonstrates the impact of different quantization methods on FPGA resource consumption. These experimental results can provide a reference for practical applications, thereby providing a reference hardware acceleration scheme for edge intelligence applications.
29

Liu, Yang, Yiheng Zhang, Xiaoran Hao, Lan Chen, Mao Ni, Ming Chen, and Rong Chen. "Design of a Convolutional Neural Network Accelerator Based on On-Chip Data Reordering". Electronics 13, no. 5 (March 4, 2024): 975. http://dx.doi.org/10.3390/electronics13050975.

Abstract:
Convolutional neural networks have been widely applied in the field of computer vision. In convolutional neural networks, convolution operations account for more than 90% of the total computational workload. The current mainstream approach to achieving high energy-efficient convolution operations is through dedicated hardware accelerators. Convolution operations involve a significant amount of weights and input feature data. Due to limited on-chip cache space in accelerators, there is a significant amount of off-chip DRAM memory access involved in the computation process. The latency of DRAM access is 20 times higher than that of SRAM, and the energy consumption of DRAM access is 100 times higher than that of multiply–accumulate (MAC) units. It is evident that the “memory wall” and “power wall” issues in neural network computation remain challenging. This paper presents the design of a hardware accelerator for convolutional neural networks. It employs a dataflow optimization strategy based on on-chip data reordering. This strategy improves on-chip data utilization and reduces the frequency of data exchanges between on-chip cache and off-chip DRAM. The experimental results indicate that compared to the accelerator without this strategy, it can reduce data exchange frequency by up to 82.9%.
30

Ro, Yuhwan, Eojin Lee, and Jung Ahn. "Evaluating the Impact of Optical Interconnects on a Multi-Chip Machine-Learning Architecture". Electronics 7, no. 8 (July 27, 2018): 130. http://dx.doi.org/10.3390/electronics7080130.

Abstract:
Following trends that emphasize neural networks for machine learning, many studies regarding computing systems have focused on accelerating deep neural networks. These studies often propose utilizing the accelerator specialized in a neural network and the cluster architecture composed of interconnected accelerator chips. We observed that inter-accelerator communication within a cluster has a significant impact on the training time of the neural network. In this paper, we show the advantages of optical interconnects for multi-chip machine-learning architecture by demonstrating performance improvements through replacing electrical interconnects with optical ones in an existing multi-chip system. We propose to use highly practical optical interconnect implementation and devise an arithmetic performance model to fairly assess the impact of optical interconnects on a machine-learning accelerator platform. In our evaluation of nine Convolutional Neural Networks with various input sizes, 100 and 400 Gbps optical interconnects reduce the training time by an average of 20.6% and 35.6%, respectively, compared to the baseline system with 25.6 Gbps electrical ones.
31

Huang, Hongmin, Zihao Liu, Taosheng Chen, Xianghong Hu, Qiming Zhang, and Xiaoming Xiong. "Design Space Exploration for YOLO Neural Network Accelerator". Electronics 9, no. 11 (November 16, 2020): 1921. http://dx.doi.org/10.3390/electronics9111921.

Abstract:
The You Only Look Once (YOLO) neural network has great advantages and extensive applications in computer vision. The convolutional layers are the most important part of the neural network and take up most of the computation time. Improving the efficiency of the convolution operations can greatly increase the speed of the neural network. Field programmable gate arrays (FPGAs) have been widely used in accelerators for convolutional neural networks (CNNs) thanks to their configurability and parallel computing. This paper proposes a design space exploration for the YOLO neural network based on FPGA. A data block transmission strategy is proposed and a multiply and accumulate (MAC) design, which consists of two 14 × 14 processing element (PE) matrices, is designed. The PE matrices are configurable for different CNNs according to the given required functions. In order to take full advantage of the limited logical resources and the memory bandwidth on the given FPGA device and to simultaneously achieve the best performance, an improved roofline model is used to evaluate the hardware design to balance the computing throughput and the memory bandwidth requirement. The accelerator achieves 41.99 giga operations per second (GOPS) and consumes 7.50 W running at the frequency of 100 MHz on the Xilinx ZC706 board.
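
The roofline model referenced above bounds attainable throughput by the smaller of the peak compute rate and the memory bandwidth times the operational intensity (operations per byte moved). A minimal sketch with placeholder platform numbers, not the improved model or the ZC706 figures from the paper:

```python
def roofline_gops(ops, bytes_moved, peak_gops=200.0, bandwidth_gbs=12.8):
    """Attainable throughput (GOPS) for a layer under the basic roofline model."""
    intensity = ops / bytes_moved            # operational intensity, ops per byte
    return min(peak_gops, bandwidth_gbs * intensity)

# A layer doing 230 MOPs while moving 29 MB is memory-bound on this platform:
print(roofline_gops(230e6, 29e6))            # ~101.5 GOPS, below the 200 GOPS peak
```
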
32

Chen, Zhimei. "Hardware Accelerated Optimization of Deep Learning Model on Artificial Intelligence Chip". Frontiers in Computing and Intelligent Systems 6, no. 2 (December 15, 2023): 11–14. http://dx.doi.org/10.54097/fcis.v6i2.03.

Abstract:
With the rapid development of deep learning technology, the demand for computing resources is increasing, and the accelerated optimization of hardware on artificial intelligence (AI) chip has become one of the key ways to solve this challenge. This paper aims to explore the hardware acceleration optimization strategy of deep learning model on AI chip to improve the training and inference performance of the model. In this paper, the method and practice of optimizing deep learning model on AI chip are deeply analyzed by comprehensively considering the hardware characteristics such as parallel processing ability, energy-efficient computing, neural network accelerator, flexibility and programmability, high integration and heterogeneous computing structure. By designing and implementing an efficient convolution accelerator, the computational efficiency of the model is improved. The introduction of energy-efficient computing effectively reduces energy consumption, which provides feasibility for the practical application of mobile devices and embedded systems. At the same time, the optimization design of neural network accelerator becomes the core of hardware acceleration, and deep learning calculation such as convolution and matrix operation are accelerated through special hardware structure, which provides strong support for the real-time performance of the model. By analyzing the actual application cases of hardware accelerated optimization in different application scenarios, this paper highlights the key role of hardware accelerated optimization in improving the performance of deep learning model. Hardware accelerated optimization not only improves the computing efficiency, but also provides efficient and intelligent computing support for AI applications in different fields.
33

Brennsteiner, Stefan, Tughrul Arslan, John Thompson, and Andrew McCormick. "A Real-Time Deep Learning OFDM Receiver". ACM Transactions on Reconfigurable Technology and Systems 15, no. 3 (September 30, 2022): 1–25. http://dx.doi.org/10.1145/3494049.

Abstract:
Machine learning in the physical layer of communication systems holds the potential to improve performance and simplify design methodology. Many algorithms have been proposed; however, the model complexity is often unfeasible for real-time deployment. The real-time processing capability of these systems has not been proven yet. In this work, we propose a novel, less complex, fully connected neural network to perform channel estimation and signal detection in an orthogonal frequency division multiplexing system. The memory requirement, which is often the bottleneck for fully connected neural networks, is reduced by ≈ 27 times by applying known compression techniques in a three-step training process. Extensive experiments were performed for pruning and quantizing the weights of the neural network detector. Additionally, Huffman encoding was used on the weights to further reduce memory requirements. Based on this approach, we propose the first field-programmable gate array based, real-time capable neural network accelerator, specifically designed to accelerate the orthogonal frequency division multiplexing detector workload. The accelerator is synthesized for a Xilinx RFSoC field-programmable gate array, uses small-batch processing to increase throughput, efficiently supports branching neural networks, and implements superscalar Huffman decoders.
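
The three-step compression applied to the detector network (magnitude pruning, weight quantization, and Huffman coding of the quantized values) can be sketched in plain Python. The pruning ratio and bit width below are arbitrary examples, not the paper's settings.

```python
import heapq
from collections import Counter

import numpy as np

def compress_weights(w, prune_ratio=0.5, bits=4):
    """Prune small weights, quantize the survivors, then Huffman-code them."""
    w = np.asarray(w, dtype=float).ravel()
    thresh = np.quantile(np.abs(w), prune_ratio)              # magnitude pruning
    pruned = np.where(np.abs(w) < thresh, 0.0, w)
    scale = np.max(np.abs(pruned)) / (2 ** (bits - 1) - 1) or 1.0
    symbols = np.round(pruned / scale).astype(int).tolist()   # uniform quantization

    # Build a Huffman codebook over the quantized symbols
    heap = [[count, i, {sym: ""}] for i, (sym, count) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    codebook = heap[0][2]
    bitstream = "".join(codebook[s] for s in symbols)
    return bitstream, codebook, scale

stream, book, scale = compress_weights(np.random.randn(4096))
print(len(stream), "bits after compression,", len(book), "codebook entries")
```
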
34

Cho, Mannhee, and Youngmin Kim. "FPGA-Based Convolutional Neural Network Accelerator with Resource-Optimized Approximate Multiply-Accumulate Unit". Electronics 10, no. 22 (November 19, 2021): 2859. http://dx.doi.org/10.3390/electronics10222859.

Abstract:
Convolutional neural networks (CNNs) are widely used in modern applications for their versatility and high classification accuracy. Field-programmable gate arrays (FPGAs) are considered to be suitable platforms for CNNs based on their high performance, rapid development, and reconfigurability. Although many studies have proposed methods for implementing high-performance CNN accelerators on FPGAs using optimized data types and algorithm transformations, accelerators can be optimized further by investigating more efficient uses of FPGA resources. In this paper, we propose an FPGA-based CNN accelerator using multiple approximate accumulation units based on a fixed-point data type. We implemented the LeNet-5 CNN architecture, which performs classification of handwritten digits using the MNIST handwritten digit dataset. The proposed accelerator was implemented, using a high-level synthesis tool on a Xilinx FPGA. The proposed accelerator applies an optimized fixed-point data type and loop parallelization to improve performance. Approximate operation units are implemented using FPGA logic resources instead of high-precision digital signal processing (DSP) blocks, which are inefficient for low-precision data. Our accelerator model achieves 66% less memory usage and approximately 50% reduced network latency, compared to a floating point design and its resource utilization is optimized to use 78% fewer DSP blocks, compared to general fixed-point designs.
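
A fixed-point multiply-accumulate of the kind mapped onto LUT logic instead of DSP blocks reduces to integer arithmetic with a shared binary scale factor. A small Python model of the arithmetic follows; the word lengths are chosen arbitrarily and the approximation tricks of the paper's unit are not modelled.

```python
def to_fixed(x, frac_bits=8, word_bits=16):
    """Quantize a real number to two's-complement fixed point with saturation."""
    q = int(round(x * (1 << frac_bits)))
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, q))

def fixed_mac(acts, weights, frac_bits=8):
    """Multiply-accumulate in the integer domain, rescale once at the end."""
    acc = 0
    for a, w in zip(acts, weights):
        acc += to_fixed(a, frac_bits) * to_fixed(w, frac_bits)
    return acc / float(1 << (2 * frac_bits))    # each product carries 2*frac_bits fraction

print(fixed_mac([0.5, -1.25, 0.75], [1.5, 0.5, -2.0]))   # -1.375, matching the exact result
```
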
35

Choubey, Abhishek, and Shruti Bhargava Choubey. "A Promising Hardware Accelerator with PAST Adder". Advances in Science and Technology 105 (April 2021): 241–48. http://dx.doi.org/10.4028/www.scientific.net/ast.105.241.

Abstract:
Recent neural network research has demonstrated a significant benefit in machine learning compared to conventional algorithms based on handcrafted models and features. In areas such as video, speech and image recognition, the neural network is now widely adopted. However, the high complexity of neural network inference in computation and storage poses great difficulties for its application. These networks are compute-intensive algorithms that currently require dedicated hardware for execution. In this context, we point out the difficulty of multi-operand adders (MOAs) and their high resource utilization in an FPGA implementation of a CNN. To address these challenges, a parallel self-timed adder (PASTA) is implemented, which mainly aims at minimizing the number of transistors and evaluating different factors for PASTA, i.e., area, power, and delay.
36

de Sousa, André L., Mário P. Véstias, and Horácio C. Neto. "Multi-Model Inference Accelerator for Binary Convolutional Neural Networks". Electronics 11, no. 23 (November 30, 2022): 3966. http://dx.doi.org/10.3390/electronics11233966.

Abstract:
Binary convolutional neural networks (BCNNs) have shown good accuracy for small to medium neural network models. Their extreme quantization of weights and activations reduces off-chip data transfer and greatly reduces the computational complexity of convolutions. Further reduction in the complexity of a BCNN model for fast execution can be achieved with model size reduction at the cost of network accuracy. In this paper, a multi-model inference technique is proposed to reduce the execution time of the binarized inference process without accuracy reduction. The technique considers a cascade of neural network models with different computation/accuracy ratios. A parameterizable binarized neural network with different trade-offs between complexity and accuracy is used to obtain multiple network models. We also propose a hardware accelerator to run multi-model inference with high throughput in embedded systems. The multi-model inference accelerator is demonstrated on low-density Zynq-7010 and Zynq-7020 FPGA devices, classifying images from the CIFAR-10 dataset. The proposed accelerator improves the frame rate per LUT by 7.2× compared to previous solutions on a Zynq-7020 FPGA with similar accuracy. This shows the effectiveness of the multi-model inference technique and the efficiency of the proposed hardware accelerator.
Estilos ABNT, Harvard, Vancouver, APA, etc.
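A rough sketch of the multi-model cascade idea from the entry above: run the cheapest binarized model first and only fall back to larger models when its confidence is low. The model interface, ordering, and threshold value are assumptions, not the paper's implementation.

    def cascade_predict(x, models, threshold=0.9):
        """Run models from cheapest to most accurate; stop as soon as the
        softmax confidence clears the threshold (threshold value is an assumption)."""
        for model in models[:-1]:
            probs = model(x)                 # each model returns class probabilities
            if probs.max() >= threshold:
                return int(probs.argmax())
        return int(models[-1](x).argmax())   # largest model decides the hard cases

A call such as cascade_predict(image, [small_bnn, large_bnn]), with hypothetical model names, would resolve easy images with the small network and reserve the large one for hard cases.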
37

Kang, Soongyu, Seongjoo Lee e Yunho Jung. "Design of Network-on-Chip-Based Restricted Coulomb Energy Neural Network Accelerator on FPGA Device". Sensors 24, n.º 6 (15 de março de 2024): 1891. http://dx.doi.org/10.3390/s24061891.

Texto completo da fonte
Resumo:
Sensor applications in internet of things (IoT) systems, coupled with artificial intelligence (AI) technology, are becoming an increasingly significant part of modern life. For low-latency AI computation in IoT systems, there is a growing preference for edge-based computing over cloud-based alternatives. The restricted coulomb energy neural network (RCE-NN) is a machine learning algorithm well-suited for implementation on edge devices due to its simple learning and recognition scheme. In addition, because the RCE-NN generates neurons as needed, it is easy to adjust the network structure and learn additional data. Therefore, the RCE-NN can provide edge-based real-time processing for various sensor applications. However, previous RCE-NN accelerators have limited scalability when the number of neurons increases. In this paper, we propose a network-on-chip (NoC)-based RCE-NN accelerator and present the results of implementation on a field-programmable gate array (FPGA). NoC is an effective solution for managing massive interconnections. The proposed RCE-NN accelerator utilizes a hierarchical–star (H–star) topology, which efficiently handles a large number of neurons, along with routers specifically designed for the RCE-NN. These approaches result in only a slight decrease in the maximum operating frequency as the number of neurons increases. Consequently, the maximum operating frequency of the proposed RCE-NN accelerator with 512 neurons increased by 126.1% compared to a previous RCE-NN accelerator. This enhancement was verified with two datasets for gas and sign language recognition, achieving accelerations of up to 54.8% in learning time and up to 45.7% in recognition time. The NoC scheme of the proposed RCE-NN accelerator is an appropriate solution to ensure the scalability of the neural network while providing high-performance on-chip learning and recognition.
Estilos ABNT, Harvard, Vancouver, APA, etc.
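Because the RCE-NN commits neurons on demand, its learning rule is compact enough to sketch. The toy model below illustrates prototype creation and radius shrinking; the distance metric, initial radius, and decision rule are assumptions, and it says nothing about the NoC mapping proposed in the paper.

    import numpy as np

    class RCENN:
        """Toy restricted coulomb energy NN: each neuron = (prototype, radius, label)."""
        def __init__(self, r_init=1.0, r_min=1e-3):
            self.protos, self.radii, self.labels = [], [], []
            self.r_init, self.r_min = r_init, r_min

        def learn(self, x, y):
            covered = False
            for i, p in enumerate(self.protos):
                d = np.linalg.norm(x - p)
                if d < self.radii[i]:
                    if self.labels[i] == y:
                        covered = True
                    else:                          # wrong-class neuron: shrink its field
                        self.radii[i] = max(d, self.r_min)
            if not covered:                        # commit a new neuron on demand
                self.protos.append(np.asarray(x, float))
                self.radii.append(self.r_init)
                self.labels.append(y)

        def recognize(self, x):
            hits = [self.labels[i] for i, p in enumerate(self.protos)
                    if np.linalg.norm(x - p) < self.radii[i]]
            return max(set(hits), key=hits.count) if hits else None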
38

Cosatto, E., e H. P. Graf. "A neural network accelerator for image analysis". IEEE Micro 15, n.º 3 (junho de 1995): 32–38. http://dx.doi.org/10.1109/40.387680.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Kuznar, Damian, Robert Szczygiel, Piotr Maj e Anna Kozioł. "Design of artificial neural network hardware accelerator". Journal of Instrumentation 18, n.º 04 (1 de abril de 2023): C04013. http://dx.doi.org/10.1088/1748-0221/18/04/c04013.

Texto completo da fonte
Resumo:
We present a design of a scalable processor providing artificial neural network (ANN) functionality, together with in-house developed tools for automatic conversion of an ANN model designed with the TensorFlow library into HDL code. The hardware is described in SystemVerilog, and the synthesized processor module can perform neural network calculations at clock frequencies exceeding 100 MHz. Our in-house software tool for ANN conversion supports translation of an arbitrary multilayer perceptron neural network into a state-machine module that performs the necessary calculations. The design is also dynamically reconfigurable, so the ANN running on the hardware can be changed after it is deployed as an ASIC. The project targets an in-pixel implementation for X-ray photon energy estimation, where the energy estimate is to be delivered with an accuracy exceeding that of the ADC that feeds the ANN with data.
Estilos ABNT, Harvard, Vancouver, APA, etc.
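As a rough illustration of the TensorFlow-to-HDL flow described above, the sketch below walks a Keras Sequential MLP and produces per-layer fixed-point weight tables that a code generator could render as SystemVerilog constants; the Q-format, the 16-bit clipping, and the restriction to Dense layers are assumptions, not the authors' tool.

    import numpy as np
    import tensorflow as tf

    def export_mlp_weights(model, frac_bits=8):
        """Collect per-layer fixed-point weight tables from a Keras Sequential MLP."""
        tables = []
        for layer in model.layers:
            if isinstance(layer, tf.keras.layers.Dense):
                w, b = layer.get_weights()
                tables.append({
                    "name": layer.name,
                    "weights": np.clip(np.round(w * (1 << frac_bits)), -32768, 32767).astype(np.int16),
                    "bias": np.clip(np.round(b * (1 << frac_bits)), -32768, 32767).astype(np.int16),
                })
        return tables   # downstream, each table would be rendered as HDL constants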
40

Wang, Yuejiao, Zhong Ma e Zunming Yang. "Sequential Characteristics Based Operators Disassembly Quantization Method for LSTM Layers". Applied Sciences 12, n.º 24 (12 de dezembro de 2022): 12744. http://dx.doi.org/10.3390/app122412744.

Texto completo da fonte
Resumo:
Embedded computing platforms such as neural network accelerators deploying neural network models need to quantize the values into low-bit integers through quantization operations. However, most current embedded computing platforms with a fixed-point architecture do not directly support performing the quantization operation for the LSTM layer. Meanwhile, the influence of sequential input data for LSTM has not been taken into account by quantization algorithms. Aiming at these two technical bottlenecks, a new sequential-characteristics-based operators disassembly quantization method for LSTM layers is proposed. Specifically, the calculation process of the LSTM layer is split into multiple regular layers supported by the neural network accelerator. The quantization-parameter-generation process is designed as a sequential-characteristics-based combination strategy for sequential and diverse image groups. Therefore, LSTM is converted into multiple mature operators for single-layer quantization and deployed on the neural network accelerator. Comparison experiments with the state of the art show that the proposed quantization method has comparable or even better performance than the full-precision baseline in the field of character-/word-level language prediction and image classification applications. The proposed method has strong application potential in the subsequent addition of novel operators for future neural network accelerators.
Estilos ABNT, Harvard, Vancouver, APA, etc.
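The disassembly idea above maps an LSTM layer onto operators a fixed-point accelerator already supports. The sketch below expresses one LSTM step purely as matrix multiplies and elementwise operations, each of which could then be quantized as a regular layer; the gate ordering and weight layout are assumptions, and the quantization itself is not shown.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step_as_operators(x, h, c, W, U, b):
        """One LSTM step as matmuls plus elementwise ops.
        W: (4H, X), U: (4H, H), b: (4H,), gate order i, f, g, o (an assumption)."""
        z = W @ x + U @ h + b                  # single fused matmul, then split per gate
        H = h.shape[0]
        i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c_next = f * c + i * g                 # elementwise ops map to supported layers
        h_next = o * np.tanh(c_next)
        return h_next, c_next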
41

Seto, Kenshu. "A Survey on System-Level Design of Neural Network Accelerators". Journal of Integrated Circuits and Systems 16, n.º 2 (18 de agosto de 2021): 1–10. http://dx.doi.org/10.29292/jics.v16i2.505.

Texto completo da fonte
Resumo:
In this paper, we present a brief survey on the system-level optimizations used for convolutional neural network (CNN) inference accelerators. For the nested loop of convolutional (CONV) layers, we discuss the effects of loop optimizations such as loop interchange, tiling, unrolling and fusion on CNN accelerators. We also explain memory optimizations that are effective with the loop optimizations. In addition, we discuss streaming architectures and single computation engine architectures that are commonly used in CNN accelerators. Optimizations for CNN models are briefly explained, followed by the recent trends and future directions of the CNN accelerator design.
Estilos ABNT, Harvard, Vancouver, APA, etc.
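To make the loop transformations discussed in the survey concrete, the sketch below applies tiling to a naive convolution loop nest; the tile sizes, stride, and padding are arbitrary assumptions, and a real accelerator would additionally unroll the inner loops onto parallel hardware and reorder them to match its on-chip buffers.

    import numpy as np

    def conv2d_tiled(x, w, tile_oc=4, tile_ow=8):
        """CONV layer with loop tiling on output channels and columns.
        x: (IC, IH, IW), w: (OC, IC, K, K); stride 1, no padding (assumptions)."""
        IC, IH, IW = x.shape
        OC, _, K, _ = w.shape
        OH, OW = IH - K + 1, IW - K + 1
        y = np.zeros((OC, OH, OW))
        for oc0 in range(0, OC, tile_oc):             # tile loop: output channels
            for ow0 in range(0, OW, tile_ow):         # tile loop: output columns
                for oc in range(oc0, min(oc0 + tile_oc, OC)):
                    for oh in range(OH):
                        for ow in range(ow0, min(ow0 + tile_ow, OW)):
                            y[oc, oh, ow] = np.sum(x[:, oh:oh+K, ow:ow+K] * w[oc])
        return y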
42

Park, Sang-Soo, e Ki-Seok Chung. "CONNA: Configurable Matrix Multiplication Engine for Neural Network Acceleration". Electronics 11, n.º 15 (29 de julho de 2022): 2373. http://dx.doi.org/10.3390/electronics11152373.

Texto completo da fonte
Resumo:
Convolutional neural networks (CNNs) have demonstrated promising results in various applications such as computer vision, speech recognition, and natural language processing. One of the key computations in many CNN applications is matrix multiplication, which accounts for a significant portion of computation. Therefore, hardware accelerators to effectively speed up the computation of matrix multiplication have been proposed, and several studies have attempted to design hardware accelerators to perform better matrix multiplications in terms of both speed and power consumption. Typically, accelerators with either a two-dimensional (2D) systolic array structure or a single instruction multiple data (SIMD) architecture are effective only when the input matrix has shapes that are close to or similar to a square. However, several CNN applications require multiplications of non-squared matrices with various shapes and dimensions, and such irregular shapes lead to poor utilization efficiency of the processing elements (PEs). This study proposes a configurable engine for neural network acceleration, called CONNA, whose computation engine can conduct matrix multiplications with highly utilized computing units, regardless of the access patterns, shapes, and dimensions of the input matrices by changing the shape of matrix multiplication conducted in the physical array. To verify the functionality of the CONNA accelerator, we implemented CONNA as an SoC platform that integrates a RISC-V MCU with CONNA on a Xilinx VC707 FPGA. SqueezeNet on CONNA achieved an inference performance of 100 frames per second (FPS) with 2.36 mm² and 83.55 mW in a 65 nm process, improving efficiency by up to 34.1 times compared with existing accelerators in terms of FPS, silicon area, and power consumption.
Estilos ABNT, Harvard, Vancouver, APA, etc.
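The utilization problem CONNA addresses can be seen in a simple tiled matrix multiply: when the matrices are far from square, the edge tiles leave part of the PE array idle. The sketch below measures that idle fraction; the 8×8 array and the tiling scheme are assumptions for illustration, not the CONNA configuration.

    import numpy as np

    def tiled_matmul(A, B, pe_rows=8, pe_cols=8):
        """Tiled matrix multiply over a pe_rows x pe_cols PE array, reporting
        how full the array is on average (edge tiles are only partially filled)."""
        M, K = A.shape
        _, N = B.shape
        C = np.zeros((M, N))
        busy = total = 0
        for i in range(0, M, pe_rows):
            for j in range(0, N, pe_cols):
                mi, nj = min(pe_rows, M - i), min(pe_cols, N - j)
                C[i:i+mi, j:j+nj] = A[i:i+mi, :] @ B[:, j:j+nj]
                busy += mi * nj
                total += pe_rows * pe_cols
        return C, busy / total        # utilization of the physical array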
43

Wang, Hongzhe, Junjie Wang, Hao Hu, Guo Li, Shaogang Hu, Qi Yu, Zhen Liu, Tupei Chen, Shijie Zhou e Yang Liu. "Ultra-High-Speed Accelerator Architecture for Convolutional Neural Network Based on Processing-in-Memory Using Resistive Random Access Memory". Sensors 23, n.º 5 (21 de fevereiro de 2023): 2401. http://dx.doi.org/10.3390/s23052401.

Texto completo da fonte
Resumo:
Processing-in-Memory (PIM) based on Resistive Random Access Memory (RRAM) is an emerging acceleration architecture for artificial neural networks. This paper proposes an RRAM PIM accelerator architecture that does not use Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs). Additionally, no additional memory usage is required to avoid the need for a large amount of data transportation in convolution computation. Partial quantization is introduced to reduce the accuracy loss. The proposed architecture can substantially reduce the overall power consumption and accelerate computation. The simulation results show that the image recognition rate for the Convolutional Neural Network (CNN) algorithm can reach 284 frames per second at 50 MHz using this architecture. The accuracy of the partial quantization remains almost unchanged compared to the algorithm without quantization.
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Gowda, Kavitha Malali Vishveshwarappa, Sowmya Madhavan, Stefano Rinaldi, Parameshachari Bidare Divakarachari e Anitha Atmakur. "FPGA-Based Reconfigurable Convolutional Neural Network Accelerator Using Sparse and Convolutional Optimization". Electronics 11, n.º 10 (22 de maio de 2022): 1653. http://dx.doi.org/10.3390/electronics11101653.

Texto completo da fonte
Resumo:
Nowadays, the dataflow architecture is considered a general solution for the acceleration of deep neural networks (DNNs) because of its higher parallelism. However, conventional DNN accelerators offer only restricted flexibility for diverse network models. To overcome this, a reconfigurable convolutional neural network (RCNN) accelerator, i.e., a reconfigurable accelerator for one type of DNN, needs to be developed on the field-programmable gate array (FPGA) platform. In this paper, sparse optimization of weights (SOW) and convolutional optimization (CO) are proposed to improve the performance of the RCNN accelerator. The combination of SOW and CO is used to optimize the feature map and weight sizes of the RCNN accelerator; therefore, the hardware resources consumed by the RCNN in the FPGA are minimized. The performance of RCNN-SOW-CO is analyzed in terms of feature map size, weight size, sparseness of the input feature map (IFM), weight parameter proportion, block random access memory (BRAM), digital signal processing (DSP) elements, look-up tables (LUTs), slices, delay, power, and accuracy. Existing architectures, OIDSCNN, LP-CNN, and DPR-NN, are used to assess the efficiency of RCNN-SOW-CO. The LUT count of RCNN-SOW-CO with AlexNet implemented on the Zynq-7020 is 5150, which is lower than those of OIDSCNN and DPR-NN.
Estilos ABNT, Harvard, Vancouver, APA, etc.
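A minimal sketch of the sparse-weight idea behind SOW: store only the nonzero weights with their indices and multiply-accumulate over those entries alone. The storage format and threshold are simplifying assumptions, not the paper's scheme.

    import numpy as np

    def compress_weights(w, threshold=1e-6):
        """Keep only nonzero weights of a 1-D (flattened) weight array (assumption)."""
        idx = np.flatnonzero(np.abs(w) > threshold)
        return w[idx], idx

    def sparse_dot(values, indices, activations):
        """Multiply-accumulate only where weights are nonzero."""
        return float(np.dot(values, activations[indices]))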
45

C., Dr Aarthi, e Kowsalya S. "Feed Forward Neural Network With Column-Wise Matrix–Vector Multiplication on FPGAs". International Research Journal of Computer Science 11, n.º 04 (5 de abril de 2024): 355–59. http://dx.doi.org/10.26562/irjcs.2024.v1104.42.

Texto completo da fonte
Resumo:
This article presents RENOWN, a reconfigurable accelerator for recurrent neural networks with fine-grained column-wise matrix–vector multiplication. We propose a novel latency-hiding architecture for a recurrent neural network accelerator that uses column-wise matrix–vector multiplication instead of the state-of-the-art row-wise operation. This hardware (HW) architecture eliminates data dependencies to improve the throughput of RNN inference systems. In addition, we introduce a configurable checkerboard tiling strategy that supports large weight matrices while incorporating various configurations of element-based parallelism (EP) and vector-based parallelism (VP). These optimizations improve the exploitation of parallelism to increase HW utilization and enhance system throughput. Evaluation results show that our design can achieve over 29.6 tera operations per second (TOPS), which would be among the highest for field-programmable gate array (FPGA)-based RNN designs. Compared to state-of-the-art accelerators on FPGAs, our design achieves 3.7–14.8 times better performance and has the highest HW utilization.
Estilos ABNT, Harvard, Vancouver, APA, etc.
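The column-wise ordering described above lets partial outputs accumulate while input elements are still arriving, which is what hides the recurrent dependency: work can start before the full vector (e.g. the previous hidden state) is ready. A small sketch of the difference from row-wise operation follows; the streaming interface is an assumption.

    import numpy as np

    def mv_columnwise(W, x_stream):
        """Column-wise matrix-vector product: partial outputs are accumulated as
        each input element arrives, instead of finishing one output row at a time."""
        y = np.zeros(W.shape[0])
        for j, xj in enumerate(x_stream):   # elements may arrive one at a time
            y += W[:, j] * xj               # one column's contribution per step
        return y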
46

Hou, Jia, Zichu Liu, Zepeng Yang e Chen Yang. "Hardware Trojan Attacks on the Reconfigurable Interconnections of Field-Programmable Gate Array-Based Convolutional Neural Network Accelerators and a Physically Unclonable Function-Based Countermeasure Detection Technique". Micromachines 15, n.º 1 (19 de janeiro de 2024): 149. http://dx.doi.org/10.3390/mi15010149.

Texto completo da fonte
Resumo:
Convolutional neural networks (CNNs) have demonstrated significant superiority in modern artificial intelligence (AI) applications. To accelerate the inference process of CNNs, reconfigurable CNN accelerators that support diverse networks are widely employed for AI systems. Given the ubiquitous deployment of these AI systems, there is a growing concern regarding the security of CNN accelerators and the potential attacks they may face, including hardware Trojans. This paper proposes a hardware Trojan designed to attack a crucial component of FPGA-based CNN accelerators: the reconfigurable interconnection network. Specifically, the hardware Trojan alters the data paths during activation, resulting in incorrect connections in the arithmetic circuit and consequently causing erroneous convolutional computations. To address this issue, the paper introduces a novel detection technique based on physically unclonable functions (PUFs) to safeguard the reconfigurable interconnection network against hardware Trojan attacks. Experimental results demonstrate that by incorporating a mere 0.27% hardware overhead to the accelerator, the proposed hardware Trojan can degrade the inference accuracy of popular neural network architectures, including LeNet, AlexNet, and VGG, by a significant range of 8.93% to 86.20%. The implemented arbiter-PUF circuit on a Xilinx Zynq XC7Z100 platform successfully detects the presence and location of hardware Trojans in a reconfigurable interconnection network. This research highlights the vulnerability of reconfigurable CNN accelerators to hardware Trojan attacks and proposes a promising detection technique to mitigate potential security risks. The findings underscore the importance of addressing hardware security concerns in the design and deployment of AI systems utilizing FPGA-based CNN accelerators.
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Deng, Bao, e Hao Lv. "Research on Dynamic Reconfigurable Convolutional Neural Network Accelerator". Journal of Physics: Conference Series 1952, n.º 3 (1 de junho de 2021): 032045. http://dx.doi.org/10.1088/1742-6596/1952/3/032045.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

YAN, Jiale, Ying ZHANG, Fengbin TU, Jianxun YANG, Shixuan ZHENG, Peng OUYANG, Leibo LIU, Yuan XIE, Shaojun WEI e Shouyi YIN. "Research on low-power neural network computing accelerator". SCIENTIA SINICA Informationis 49, n.º 3 (1 de março de 2019): 314–33. http://dx.doi.org/10.1360/n112018-00282.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Kim, Doo Young, Jin Min Kim, Hakbeom Jang, Jinkyu Jeong e Jae W. Lee. "A neural network accelerator for mobile application processors". IEEE Transactions on Consumer Electronics 61, n.º 4 (novembro de 2015): 555–63. http://dx.doi.org/10.1109/tce.2015.7389812.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Gao, Chang, Antonio Rios-Navarro, Xi Chen, Shih-Chii Liu e Tobi Delbruck. "EdgeDRNN: Recurrent Neural Network Accelerator for Edge Inference". IEEE Journal on Emerging and Selected Topics in Circuits and Systems 10, n.º 4 (dezembro de 2020): 419–32. http://dx.doi.org/10.1109/jetcas.2020.3040300.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.