Ready-made bibliography on the topic "Binary neural networks (BNN)"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Binary neural networks (BNN)".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its annotation online, if the relevant parameters are present in the metadata.

Journal articles on the topic "Binary neural networks (BNN)"

1. Rozen, Tal, Moshe Kimhi, Brian Chmiel, Avi Mendelson, and Chaim Baskin. "Bimodal-Distributed Binarized Neural Networks." Mathematics 10, no. 21 (2022): 4107. http://dx.doi.org/10.3390/math10214107.

Abstract:
Binary neural networks (BNNs) are an extremely promising method for significantly reducing deep neural networks' complexity and power consumption. Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for sign function approximation during the forward and backward phases to reduce the quantization error during the binarization process. In this work, we propose a bimodal-distributed binarization method (BD-BNN). The newly proposed technique aims to impose a bimodal distribution of t
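
A minimal sketch of the generic sign-binarization step with a straight-through estimator that such forward/backward approximation strategies start from (this is not BD-BNN's bimodal-distribution method itself; it assumes PyTorch is available, and the clipping threshold of 1 is a common convention):

```python
import torch


class SignSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        b = torch.sign(x)
        b[b == 0] = 1.0  # keep every output strictly in {-1, +1}
        return b

    @staticmethod
    def backward(ctx, grad_output):
        # Pass the gradient through unchanged where |x| <= 1 (clipped identity),
        # a common surrogate for the zero-almost-everywhere derivative of sign.
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


w = torch.randn(8, requires_grad=True)
SignSTE.apply(w).sum().backward()
print(w.grad)  # non-zero only where |w| <= 1
```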

2. Cho, Jaechan, Yongchul Jung, Seongjoo Lee, and Yunho Jung. "Reconfigurable Binary Neural Network Accelerator with Adaptive Parallelism Scheme." Electronics 10, no. 3 (2021): 230. http://dx.doi.org/10.3390/electronics10030230.

Abstract:
Binary neural networks (BNNs) have attracted significant interest for the implementation of deep neural networks (DNNs) on resource-constrained edge devices, and various BNN accelerator architectures have been proposed to achieve higher efficiency. BNN accelerators can be divided into two categories: streaming and layer accelerators. Although streaming accelerators designed for a specific BNN network topology provide high throughput, they are infeasible for various sensor applications in edge AI because of their complexity and inflexibility. In contrast, layer accelerators with reasonable reso

3. Sunny, Febin P., Asif Mirza, Mahdi Nikdast, and Sudeep Pasricha. "ROBIN: A Robust Optical Binary Neural Network Accelerator." ACM Transactions on Embedded Computing Systems 20, no. 5s (2021): 1–24. http://dx.doi.org/10.1145/3476988.

Abstract:
Domain-specific neural network accelerators have garnered attention because of their improved energy efficiency and inference performance compared to CPUs and GPUs. Such accelerators are thus well suited for resource-constrained embedded systems. However, mapping sophisticated neural network models on these accelerators still entails significant energy and memory consumption, along with high inference time overhead. Binarized neural networks (BNNs), which utilize single-bit weights, represent an efficient way to implement and deploy neural network models on accelerators. In this paper, we pres

4. Simons, Taylor, and Dah-Jye Lee. "A Review of Binarized Neural Networks." Electronics 8, no. 6 (2019): 661. http://dx.doi.org/10.3390/electronics8060661.

Abstract:
In this work, we review Binarized Neural Networks (BNNs). BNNs are deep neural networks that use binary values for activations and weights, instead of full-precision values. With binary values, BNNs can execute computations using bitwise operations, which reduces execution time. Model sizes of BNNs are much smaller than their full-precision counterparts. While the accuracy of a BNN model is generally lower than that of full-precision models, BNNs have been closing the accuracy gap and are becoming more accurate on larger datasets like ImageNet. BNNs are also good candidates for deep learning implementation
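
As a plain-Python illustration of the bitwise trick the abstract alludes to, the dot product of two {-1, +1} vectors can be computed with XNOR and popcount once +1 is encoded as bit 1 and -1 as bit 0 (the helper names below are illustrative):

```python
def pack(values):
    """Pack a list of +1/-1 values into an integer bitmask (+1 -> 1, -1 -> 0)."""
    word = 0
    for i, v in enumerate(values):
        if v > 0:
            word |= 1 << i
    return word


def binary_dot(a_word, b_word, n):
    """Dot product of two {-1, +1} vectors of length n via XNOR + popcount."""
    xnor = ~(a_word ^ b_word) & ((1 << n) - 1)  # bit is 1 where the signs agree
    agreements = bin(xnor).count("1")
    return 2 * agreements - n                   # agreements minus disagreements


a = [+1, -1, -1, +1]
b = [+1, +1, -1, -1]
assert binary_dot(pack(a), pack(b), len(a)) == sum(x * y for x, y in zip(a, b))
```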

5. Wang, Peisong, Xiangyu He, Gang Li, Tianli Zhao, and Jian Cheng. "Sparsity-Inducing Binarized Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 12192–99. http://dx.doi.org/10.1609/aaai.v34i07.6900.

Abstract:
Binarization of feature representation is critical for Binarized Neural Networks (BNNs). Currently, the sign function is the most commonly used method for feature binarization. Although it works well on small datasets, the performance on ImageNet remains unsatisfactory. Previous methods mainly focus on minimizing quantization error, improving the training strategies and decomposing each convolution layer into several binary convolution modules. However, whether sign is the only option for binarization has been largely overlooked. In this work, we propose the Sparsity-inducing Binarized Neural Network (Si-
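
For reference, a small NumPy sketch of the sign-based, quantization-error-minimizing baseline the abstract contrasts with, where a tensor w is approximated by alpha * sign(w) and alpha = mean(|w|) is the closed-form minimizer of the squared error (the standard XNOR-Net-style scaling, not the proposed Si-BNN binarizer):

```python
import numpy as np


def binarize_with_scale(w):
    """Approximate w by alpha * sign(w) with the closed-form scale alpha = mean(|w|)."""
    b = np.where(w >= 0, 1.0, -1.0)  # sign(w), mapping 0 to +1
    alpha = np.abs(w).mean()
    return alpha * b, alpha


w = np.array([0.8, -0.3, 0.05, -1.2])
w_hat, alpha = binarize_with_scale(w)
print(alpha, np.sum((w - w_hat) ** 2))  # scale and resulting quantization error
```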

6. Liu, Chunlei, Peng Chen, Bohan Zhuang, Chunhua Shen, Baochang Zhang, and Wenrui Ding. "SA-BNN: State-Aware Binary Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (2021): 2091–99. http://dx.doi.org/10.1609/aaai.v35i3.16306.

Abstract:
Binary Neural Networks (BNNs) have recently received significant attention due to their memory and computation efficiency. However, the considerable accuracy gap between BNNs and their full-precision counterparts hinders BNNs from being deployed on resource-constrained platforms. One of the main reasons for the performance gap can be attributed to frequent weight flips, which are caused by misleading weight updates in BNNs. To address this issue, we propose a state-aware binary neural network (SA-BNN) equipped with a well-designed state-aware gradient. Our SA-BNN is inspired by the observation
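
To make the "frequent weight flip" phenomenon concrete, a tiny NumPy helper that counts how many binarized weights change sign between two consecutive updates (purely illustrative; it is not the state-aware gradient proposed in the paper):

```python
import numpy as np


def count_weight_flips(w_prev, w_next):
    """Number of latent weights whose sign (i.e. binarized value) flips in one update."""
    return int(np.sum(np.sign(w_prev) != np.sign(w_next)))


w_before = np.array([0.40, -0.10, 0.02, -0.60])
w_after = np.array([0.30, 0.20, -0.05, -0.70])  # two entries cross zero
print(count_weight_flips(w_before, w_after))     # -> 2
```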

7. Zhao, Yiyang, Yongjia Wang, Ruibo Wang, Yuan Rong, and Xianyang Jiang. "A Highly Robust Binary Neural Network Inference Accelerator Based on Binary Memristors." Electronics 10, no. 21 (2021): 2600. http://dx.doi.org/10.3390/electronics10212600.

Abstract:
Since the memristor was discovered, it has shown great application potential in neuromorphic computing. Currently, most memristor-based neural networks exploit the device's analog characteristics. However, owing to manufacturing-process limitations, non-ideal characteristics such as non-linearity, asymmetry, and inconsistent device periodicity appear frequently, so it is a challenge to employ memristors at scale. In contrast, a binary neural network (BNN) requires its weights to be either +1 or −1, which can be mapped by digital memristors with hig

8. Xiang, Maoyang, and Tee Hui Teo. "Implementation of Binarized Neural Networks in All-Programmable System-on-Chip Platforms." Electronics 11, no. 4 (2022): 663. http://dx.doi.org/10.3390/electronics11040663.

Abstract:
The Binarized Neural Network (BNN) is a Convolutional Neural Network (CNN) consisting of binary weights and activations rather than real-valued ones. The resulting models are smaller, allowing inference to run effectively on mobile or embedded devices with limited power and computing capabilities. Nevertheless, binarization results in lower-entropy feature maps and vanishing gradients, which leads to a loss in accuracy compared to real-valued networks. Previous research has addressed these issues with various approaches. However, those approaches significantly increase the algorithm's time and space comple

9. Zhang, Longlong, Xuebin Tang, Xiang Hu, Tong Zhou, and Yuanxi Peng. "FPGA-Based BNN Architecture in Time Domain with Low Storage and Power Consumption." Electronics 11, no. 9 (2022): 1421. http://dx.doi.org/10.3390/electronics11091421.

Abstract:
With the increasing demand for convolutional neural networks (CNNs) in many edge computing scenarios and resource-limited settings, researchers have made efforts to apply lightweight neural networks on hardware platforms. While binarized neural networks (BNNs) perform excellently in such tasks, many implementations still face challenges such as an imbalance between accuracy and computational complexity, as well as the requirement for low power and storage consumption. This paper first proposes a novel binary convolution structure based on the time domain to reduce resource and power consumptio

10. Kim, HyunJin, Mohammed Alnemari, and Nader Bagherzadeh. "A storage-efficient ensemble classification using filter sharing on binarized convolutional neural networks." PeerJ Computer Science 8 (March 29, 2022): e924. http://dx.doi.org/10.7717/peerj-cs.924.

Abstract:
This paper proposes a storage-efficient ensemble classification to overcome the low inference accuracy of binary neural networks (BNNs). When external power is sufficient in a dynamically powered system, classification results can be enhanced by aggregating the outputs of multiple BNN classifiers. However, memory requirements for storing multiple classifiers are a significant burden in a lightweight system. The proposed scheme shares the filters from a trained convolutional neural network (CNN) model to reduce storage requirements in the binarized CNNs instead of adopting the fully independent classifie
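
A generic NumPy sketch of the output-aggregation step such an ensemble relies on, averaging the logits of several binarized classifiers before taking the argmax (the filter-sharing mechanism that makes the ensemble storage-efficient is not reproduced here; the shapes and values are made up):

```python
import numpy as np


def ensemble_predict(logits_per_classifier):
    """Predict a class by averaging the logits of several BNN classifiers.

    logits_per_classifier: array of shape (n_classifiers, n_classes).
    """
    mean_logits = np.mean(logits_per_classifier, axis=0)
    return int(np.argmax(mean_logits))


# Three hypothetical binarized classifiers voting on a 4-class problem.
outputs = np.array([
    [0.1, 0.7, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.3, 0.3, 0.3, 0.1],
])
print(ensemble_predict(outputs))  # -> 1
```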