Journal articles on the topic 'Binary neural networks (BNN)'


Consult the top 50 journal articles for your research on the topic 'Binary neural networks (BNN).'


1

Rozen, Tal, Moshe Kimhi, Brian Chmiel, Avi Mendelson, and Chaim Baskin. "Bimodal-Distributed Binarized Neural Networks." Mathematics 10, no. 21 (November 3, 2022): 4107. http://dx.doi.org/10.3390/math10214107.

Abstract:
Binary neural networks (BNNs) are an extremely promising method for significantly reducing deep neural networks’ complexity and power consumption. Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for approximating the sign function during the forward and backward phases to reduce the quantization error of the binarization process. In this work, we propose a bimodal-distributed binarization method (BD-BNN). The newly proposed technique imposes a bimodal distribution on the network weights through kurtosis regularization. The proposed method also includes a teacher–trainer training scheme termed weight distribution mimicking (WDM), which efficiently transfers the full-precision network’s weight distribution to its binary counterpart. Preserving this distribution during binarization-aware training yields robust and informative binary feature maps and can thus significantly reduce the generalization error of the BNN. Extensive evaluations on CIFAR-10 and ImageNet demonstrate that the newly proposed BD-BNN outperforms current state-of-the-art schemes.
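As a rough illustration of the kurtosis-regularization idea described in this abstract (not the authors' code; the exact loss form in BD-BNN may differ), a per-layer penalty that pulls weight kurtosis toward the minimum value attained by two-point distributions could look like this in PyTorch:

```python
import torch

def kurtosis(w: torch.Tensor) -> torch.Tensor:
    # Sample kurtosis E[((w - mu) / sigma)^4] of a flattened weight tensor.
    w = w.flatten().float()
    mu, sigma = w.mean(), w.std()
    return torch.mean(((w - mu) / (sigma + 1e-8)) ** 4)

def bimodal_regularizer(weights, target=1.0, lam=1e-4):
    # A symmetric two-point distribution has kurtosis 1 (the minimum), so
    # penalizing the distance to 1 pushes each layer's weights toward a
    # bimodal shape before binarization. 'target' and 'lam' are assumptions.
    return lam * sum((kurtosis(w) - target) ** 2 for w in weights)
```

Added to the task loss during binarization-aware training, such a term nudges the full-precision proxy weights toward two clusters near ±1.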
2

Cho, Jaechan, Yongchul Jung, Seongjoo Lee, and Yunho Jung. "Reconfigurable Binary Neural Network Accelerator with Adaptive Parallelism Scheme." Electronics 10, no. 3 (January 20, 2021): 230. http://dx.doi.org/10.3390/electronics10030230.

Abstract:
Binary neural networks (BNNs) have attracted significant interest for the implementation of deep neural networks (DNNs) on resource-constrained edge devices, and various BNN accelerator architectures have been proposed to achieve higher efficiency. BNN accelerators can be divided into two categories: streaming and layer accelerators. Although streaming accelerators designed for a specific BNN network topology provide high throughput, they are infeasible for various sensor applications in edge AI because of their complexity and inflexibility. In contrast, layer accelerators with reasonable resources can support various network topologies, but they operate with the same parallelism for all the layers of the BNN, which degrades throughput performance at certain layers. To overcome this problem, we propose a BNN accelerator with adaptive parallelism that offers high throughput performance in all layers. The proposed accelerator analyzes target layer parameters and operates with optimal parallelism using reasonable resources. In addition, this architecture is able to fully compute all types of BNN layers thanks to its reconfigurability, and it can achieve a higher area–speed efficiency than existing accelerators. In performance evaluation using state-of-the-art BNN topologies, the designed BNN accelerator achieved an area–speed efficiency 9.69 times higher than previous FPGA implementations and 24% higher than existing VLSI implementations for BNNs.
3

Sunny, Febin P., Asif Mirza, Mahdi Nikdast, and Sudeep Pasricha. "ROBIN: A Robust Optical Binary Neural Network Accelerator." ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–24. http://dx.doi.org/10.1145/3476988.

Abstract:
Domain-specific neural network accelerators have garnered attention because of their improved energy efficiency and inference performance compared to CPUs and GPUs. Such accelerators are thus well suited for resource-constrained embedded systems. However, mapping sophisticated neural network models on these accelerators still entails significant energy and memory consumption, along with high inference time overhead. Binarized neural networks (BNNs), which utilize single-bit weights, represent an efficient way to implement and deploy neural network models on accelerators. In this paper, we present a novel optical-domain BNN accelerator, named ROBIN, which intelligently integrates heterogeneous microring resonator optical devices with complementary capabilities to efficiently implement the key functionalities in BNNs. We perform detailed fabrication-process variation analyses at the optical device level, explore efficient corrective tuning for these devices, and integrate circuit-level optimization to counter thermal variations. As a result, our proposed ROBIN architecture possesses the desirable traits of being robust, energy-efficient, low latency, and high throughput, when executing BNN models. Our analysis shows that ROBIN can outperform the best-known optical BNN accelerators and many electronic accelerators. Specifically, our energy-efficient ROBIN design exhibits energy-per-bit values that are ∼4× lower than electronic BNN accelerators and ∼933× lower than a recently proposed photonic BNN accelerator, while a performance-efficient ROBIN design shows ∼3× and ∼25× better performance than electronic and photonic BNN accelerators, respectively.
4

Simons, Taylor, and Dah-Jye Lee. "A Review of Binarized Neural Networks." Electronics 8, no. 6 (June 12, 2019): 661. http://dx.doi.org/10.3390/electronics8060661.

Abstract:
In this work, we review Binarized Neural Networks (BNNs). BNNs are deep neural networks that use binary values for activations and weights instead of full-precision values. With binary values, BNNs can execute computations using bitwise operations, which reduces execution time. Model sizes of BNNs are much smaller than those of their full-precision counterparts. While the accuracy of a BNN model is generally lower than that of full-precision models, BNNs have been closing the accuracy gap and are becoming more accurate on larger datasets such as ImageNet. BNNs are also good candidates for deep learning implementations on FPGAs and ASICs due to their bitwise efficiency. We give a tutorial on the general BNN methodology and review various contributions, implementations, and applications of BNNs.
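To make the "bitwise operations" point concrete, here is a minimal sketch (ours, not the authors') of the standard XNOR-popcount trick: with {-1, +1} entries packed as {0, 1} bits, a dot product reduces to an XNOR followed by a population count:

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-dimensional {-1, +1} vectors packed as bits
    (with -1 -> 0, +1 -> 1): dot = 2 * popcount(XNOR(a, w)) - n."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 where entries agree
    return 2 * bin(xnor).count("1") - n

# Example: a = [+1, -1, +1] -> 0b101, w = [+1, +1, -1] -> 0b110
assert binary_dot(0b101, 0b110, 3) == -1  # (1)(1) + (-1)(1) + (1)(-1)
```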
5

Wang, Peisong, Xiangyu He, Gang Li, Tianli Zhao, and Jian Cheng. "Sparsity-Inducing Binarized Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12192–99. http://dx.doi.org/10.1609/aaai.v34i07.6900.

Abstract:
Binarization of the feature representation is critical for Binarized Neural Networks (BNNs). Currently, the sign function is the most common method for feature binarization. Although it works well on small datasets, its performance on ImageNet remains unsatisfactory. Previous methods mainly focus on minimizing quantization error, improving training strategies, and decomposing each convolution layer into several binary convolution modules. However, whether sign is the only option for binarization has been largely overlooked. In this work, we propose the Sparsity-inducing Binarized Neural Network (Si-BNN), which quantizes the activations to be either 0 or +1, introducing sparsity into the binary representation. We further introduce trainable thresholds into the backward function of binarization to guide the gradient propagation. Our method dramatically outperforms the current state of the art, narrowing the performance gap between full-precision networks and BNNs on mainstream architectures and achieving new state-of-the-art results on binarized AlexNet (Top-1 50.5%), ResNet-18 (Top-1 59.7%), and VGG-Net (Top-1 63.2%). At inference time, Si-BNN still enjoys the high efficiency of exclusive-not-or (XNOR) operations.
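A minimal sketch of the core idea as we read it from the abstract: binarize activations to {0, +1} in the forward pass and gate the straight-through gradient with a threshold in the backward pass. The threshold is kept fixed here for brevity, whereas the paper makes it trainable; names and details are our assumptions.

```python
import torch

class SparseBinarize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a, t=1.0):
        # Forward: activations become 0 or +1, which induces sparsity.
        ctx.save_for_backward(a)
        ctx.t = t
        return (a > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        # Backward: straight-through estimator, passing gradients only
        # where |a| <= t (a fixed stand-in for the paper's trainable one).
        (a,) = ctx.saved_tensors
        return grad_out * (a.abs() <= ctx.t).float(), None

a = torch.randn(8, requires_grad=True)
SparseBinarize.apply(a).sum().backward()  # gradients flow only near zero
```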
6

Liu, Chunlei, Peng Chen, Bohan Zhuang, Chunhua Shen, Baochang Zhang, and Wenrui Ding. "SA-BNN: State-Aware Binary Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2091–99. http://dx.doi.org/10.1609/aaai.v35i3.16306.

Abstract:
Binary Neural Networks (BNNs) have recently received significant attention due to their memory and computation efficiency. However, the considerable accuracy gap between BNNs and their full-precision counterparts hinders the deployment of BNNs on resource-constrained platforms. One of the main reasons for the performance gap is frequent weight flipping, caused by misleading weight updates in BNNs. To address this issue, we propose a state-aware binary neural network (SA-BNN) equipped with a well-designed state-aware gradient. SA-BNN is inspired by the observation that frequent weight flips are more likely to occur when the gradient magnitude for both quantization states {-1, 1} is identical. Accordingly, we propose to employ independent gradient coefficients for the different states when updating the weights. Furthermore, we analyze the effectiveness of the state-aware gradient in suppressing the frequent weight-flip problem. Experiments on ImageNet show that the proposed SA-BNN outperforms the current state of the art (e.g., Bi-Real Net) by more than 3% when using a ResNet architecture. Specifically, we achieve 61.7%, 65.5%, and 68.7% Top-1 accuracy with ResNet-18, ResNet-34, and ResNet-50 on ImageNet, respectively.
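The "independent gradient coefficients for different states" might be sketched as below; the fixed coefficients are purely an illustrative assumption, whereas SA-BNN derives its own scheme:

```python
import torch

class StateAwareSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        ste = (w.abs() <= 1).float()  # clipped straight-through estimator
        c_neg, c_pos = 0.9, 1.1       # per-state coefficients (assumed values)
        coeff = torch.where(w < 0, c_neg * ste, c_pos * ste)
        return grad_out * coeff
```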
7

Zhao, Yiyang, Yongjia Wang, Ruibo Wang, Yuan Rong, and Xianyang Jiang. "A Highly Robust Binary Neural Network Inference Accelerator Based on Binary Memristors." Electronics 10, no. 21 (October 25, 2021): 2600. http://dx.doi.org/10.3390/electronics10212600.

Abstract:
Since the memristor was discovered, it has shown great application potential in neuromorphic computing. Currently, most memristor-based neural networks exploit the special analog characteristics of the memristor. However, owing to manufacturing-process limitations, non-ideal characteristics such as non-linearity, asymmetry, and inconsistent device periodicity inevitably appear, so employing memristors at scale remains a challenge. In contrast, a binary neural network (BNN) requires its weights to be either +1 or −1, which can be mapped onto digital memristors with high technical maturity. Building on this, a highly robust BNN inference accelerator with a binary sigmoid activation function is proposed. In the accelerator, the inputs of each network layer are either +1 or 0, which facilitates feature encoding and reduces the peripheral circuit complexity of the memristor hardware. The proposed two-column reference memristor structure, together with a current-controlled voltage source (CCVS) circuit, not only solves the problem of mapping positive and negative weights onto the memristor array but also eliminates the sneak-current effect under the minimum conductance status. Compared to the traditional differential-pair BNN structure, the proposed two-column reference scheme reduces both the number of memristors and the latency to refresh the memristor array by nearly 50%. The influence of non-ideal factors of the memristor array, such as array yield, conductance fluctuation, and reading noise, on the accuracy of the BNN is investigated in detail based on a new memristor circuit model with non-ideal characteristics. The experimental results demonstrate that when the array yield α ≥ 5%, or the reading noise σ ≤ 0.25, a recognition accuracy greater than 97% is achieved on the MNIST dataset.
8

Xiang, Maoyang, and Tee Hui Teo. "Implementation of Binarized Neural Networks in All-Programmable System-on-Chip Platforms." Electronics 11, no. 4 (February 21, 2022): 663. http://dx.doi.org/10.3390/electronics11040663.

Abstract:
The Binarized Neural Network (BNN) is a Convolutional Neural Network (CNN) with binary weights and activations rather than real-valued ones. The resulting models are much smaller, allowing effective inference on mobile or embedded devices with limited power and computing capabilities. Nevertheless, binarization results in lower-entropy feature maps and vanishing gradients, which lead to a loss in accuracy compared to real-valued networks. Previous research has addressed these issues with various approaches; however, those approaches significantly increase the algorithm’s time and space complexity, which puts a heavy burden on embedded devices. Therefore, this paper proposes a novel approach for BNN implementation on embedded systems with a multi-scale BNN topology, from two optimization perspectives, hardware structure and BNN topology, that retains more low-level features throughout the feed-forward process with few operations. Experiments on the CIFAR-10 dataset indicate that the proposed method outperforms a number of current BNN designs in terms of efficiency and accuracy. Additionally, the proposed BNN was implemented on an All Programmable System on Chip (APSoC) with 4.4 W power consumption using the hardware accelerator.
9

Zhang, Longlong, Xuebin Tang, Xiang Hu, Tong Zhou, and Yuanxi Peng. "FPGA-Based BNN Architecture in Time Domain with Low Storage and Power Consumption." Electronics 11, no. 9 (April 28, 2022): 1421. http://dx.doi.org/10.3390/electronics11091421.

Abstract:
With the increasing demand for convolutional neural networks (CNNs) in many edge-computing scenarios and resource-limited settings, researchers have made efforts to apply lightweight neural networks on hardware platforms. While binarized neural networks (BNNs) perform excellently in such tasks, many implementations still face challenges such as an imbalance between accuracy and computational complexity, as well as the requirement for low power and storage consumption. This paper first proposes a novel binary convolution structure based on the time domain to reduce resource and power consumption for the convolution process. Furthermore, through the joint design of binary convolution, batch normalization, and the activation function in the time domain, we propose a full-BNN model and hardware architecture (Model I), which keeps all intermediate results binary (1 bit) to reduce storage requirements by 75%. At the same time, we propose a mixed-precision BNN structure (Model II) based on the sensitivity of the different network layers to calculation accuracy; that is, the layer most sensitive to the classification result uses fixed-point data, while the other layers use binary data in the time domain. This achieves a balance between accuracy and computing resources. Lastly, we take the MNIST dataset as an example to test the two models on a field-programmable gate array (FPGA) platform. The results show that both models can serve as neural network acceleration units with low storage requirements and low power consumption for classification tasks, with only a small decline in accuracy. The joint design method in the time domain may further inspire other computing architectures, and the design of Model II offers a useful reference for more complex classification tasks.
10

Kim, HyunJin, Mohammed Alnemari, and Nader Bagherzadeh. "A storage-efficient ensemble classification using filter sharing on binarized convolutional neural networks." PeerJ Computer Science 8 (March 29, 2022): e924. http://dx.doi.org/10.7717/peerj-cs.924.

Abstract:
This paper proposes a storage-efficient ensemble classification scheme to overcome the low inference accuracy of binary neural networks (BNNs). When external power is sufficient in a dynamically powered system, classification results can be enhanced by aggregating the outputs of multiple BNN classifiers. However, the memory required to store multiple classifiers is a significant burden in a lightweight system. Instead of adopting fully independent classifiers, the proposed scheme shares the filters from a trained convolutional neural network (CNN) model to reduce the storage requirements of the binarized CNNs. While several filters are shared, the proposed method trains only the unfrozen learnable parameters in the retraining step. We compare and analyze the performance of the proposed ensemble-based systems for various ensemble types and BNN structures on the CIFAR datasets. Our experiments conclude that the proposed filter-sharing method scales with the number of classifiers and is effective in enhancing classification accuracy. With binarized ResNet-20 and ReActNet-10 on the CIFAR-100 dataset, the proposed scheme achieves 56.74% and 70.29% Top-1 accuracy with 10 BNN classifiers, enhancing performance by 7.6% and 3.6%, respectively, compared with a single BNN classifier.
11

Parmar, Vivek, Sandeep Kaur Kingra, Shubham Negi, and Manan Suri. "Analysis of VMM computation strategies to implement BNN applications on RRAM arrays." APL Machine Learning 1, no. 2 (June 1, 2023): 026108. http://dx.doi.org/10.1063/5.0139583.

Abstract:
The growing interest in edge-AI solutions and advances in the field of quantized neural networks have led to hardware-efficient binary neural networks (BNNs). Extreme BNNs utilize only binary weights and activations, making them more memory efficient. Such networks can be realized using exclusive-NOR (XNOR) gates and popcount circuits. The analog in-memory realization of BNNs utilizing emerging non-volatile memory devices has been widely explored recently. However, most realizations typically use 2T-2R synapses, resulting in sub-optimal area utilization. In this study, we investigate alternative computation mapping strategies to realize BNNs using selectorless resistive random access memory arrays. A new differential computation scheme that shows performance comparable to the well-established XNOR computation strategy is proposed. Through extensive experimental characterization, BNN implementation on a crossbar of non-filamentary bipolar oxide-based random access memory devices is demonstrated for two datasets: (i) experimental characterization on a thermal-image-based Rock-Paper-Scissors dataset to analyze the impact of sneak paths in real hardware, and (ii) large-scale BNN simulations on the Fashion-MNIST dataset with the multi-level cell characteristics of non-filamentary devices to demonstrate the impact of device non-idealities.
12

Trinh Quang Kien. "Improving the robustness of binarized neural network using the EFAT method." Journal of Military Science and Technology, CSCE5 (December 15, 2021): 14–23. http://dx.doi.org/10.54939/1859-1043.j.mst.csce5.2021.14-23.

Abstract:
In recent years, with the explosion of research in artificial intelligence, deep learning models based on convolutional neural networks (CNNs) have become one of the most promising architectures for practical applications thanks to their reasonably good achievable accuracy. However, CNNs, characterized by convolutional layers, often have a large number of parameters and a heavy computational workload, leading to large energy consumption for training and inference. The binarized neural network (BNN) model has recently been proposed to overcome this drawback. BNNs use binary representations for inputs and weights, which inherently reduces memory requirements and simplifies computations while maintaining acceptable accuracy. BNNs are therefore well suited for the practical realization of Edge-AI applications on resource- and energy-constrained devices such as embedded or mobile devices. As CNNs and BNNs are both composed of linear transformation layers, they can be fooled by adversarial attack patterns. This topic has been actively studied recently, but mostly for CNNs. In this work, we examine the impact of adversarial attacks on BNNs and propose a solution to improve the accuracy of BNNs against this type of attack. Specifically, we use an Enhanced Fast Adversarial Training (EFAT) method that makes the trained BNN more robust against major adversarial attack models with a very short training time. Experiments with Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks on our BNN trained on the MNIST dataset show accuracy increases from 31.34% and 0.18% to 96.96% and 85.08%, respectively.
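EFAT's exact recipe is not given here, but it builds on single-step (FGSM-based) adversarial training, whose basic training step is sketched below; the function and parameter names are ours, and the epsilon value is an assumption:

```python
import torch
import torch.nn.functional as F

def fgsm_train_step(model, x, y, optimizer, eps=0.1):
    # Craft a one-step FGSM adversarial example from the clean batch...
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()
    # ...then update the (binarized) network on the perturbed inputs.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```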
13

Jiang, Xinrui, Nannan Wang, Jingwei Xin, Keyu Li, Xi Yang, and Xinbo Gao. "Training Binary Neural Network without Batch Normalization for Image Super-Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1700–1707. http://dx.doi.org/10.1609/aaai.v35i2.16263.

Abstract:
Recently, binary neural network (BNN) based super-resolution (SR) methods have enjoyed initial success in the SR field. However, there is a noticeable performance gap between the binarized model and the full-precision one. Furthermore, the batch normalization (BN) in binary SR networks introduces floating-point calculations, which is unfriendly to low-precision hardware. Therefore, there is still room for improvement in model performance and efficiency. Focusing on this issue, we first explore a novel binary training mechanism based on the feature distribution, allowing us to replace all BN layers with a simple training method. Then, we construct a strong baseline by combining the highlights of recent binarization methods, which already surpasses the state of the art. Next, to train a highly accurate binarized SR model, we develop a lightweight network architecture and a multi-stage knowledge distillation strategy to enhance the model’s representation ability. Extensive experiments demonstrate that the proposed method not only offers lower computation than conventional floating-point networks but also outperforms state-of-the-art binary methods on standard SR networks.
14

Yu, Jie, Woyu Zhang, Danian Dong, Wenxuan Sun, Jinru Lai, Xu Zheng, Tiancheng Gong, et al. "Long-Term Accuracy Enhancement of Binary Neural Networks Based on Optimized Three-Dimensional Memristor Array." Micromachines 13, no. 2 (February 17, 2022): 308. http://dx.doi.org/10.3390/mi13020308.

Abstract:
In embedded neuromorphic Internet of Things (IoT) systems, it is critical to improve the efficiency with which neural network (NN) edge devices infer a pretrained NN. Meanwhile, in the edge-computing paradigm, device integration, data retention characteristics, and power consumption are particularly important. In this paper, the self-selected device (SSD), the base cell for building the densest three-dimensional (3D) architectures, is used to store non-volatile weights in binary neural networks (BNNs) for embedded NN applications. Because data retention issues in the device can affect the energy efficiency of the system’s operation, the data loss mechanism of the self-selected cell is elucidated. On this basis, we introduce an optimized method to retain oxygen ions and prevent their diffusion toward the switching layer by introducing a titanium interfacial layer. With this optimization, the recombination probability of oxygen vacancies (Vo) and oxygen ions is reduced, effectively improving the retention characteristics of the device. The optimization effect is verified in simulation after mapping the BNN weights to the 3D VRRAM array constructed from the SSD before and after optimization. The simulation results show that the long-term recognition accuracy (beyond 10^5 s) of the pre-trained BNN is improved by 24% and that the energy consumption of the system during training can be reduced 25,000-fold while maintaining the same accuracy. This work provides a high-storage-density, non-volatile solution to meet the low-power and miniaturization requirements of embedded neuromorphic applications.
15

Zhu, Ganlin, Hongxiao Fei, Junkun Hong, Yueyi Luo, and Jun Long. "An Information-Reserved and Deviation-Controllable Binary Neural Network for Object Detection." Mathematics 11, no. 1 (December 24, 2022): 62. http://dx.doi.org/10.3390/math11010062.

Abstract:
Object detection is a fundamental task in computer vision, usually based on convolutional neural networks (CNNs). While such detectors are difficult to deploy on embedded devices due to their huge storage and computation costs, binary neural networks (BNNs) can execute object detection with limited resources. However, the extreme quantization in BNNs causes a loss of diversity in feature representation, which ultimately hurts object detection performance. In this paper, we propose a method balancing Information Retention and Deviation Control to achieve effective object detection, named IR-DC Net. On the one hand, we introduce the KL-divergence to compose multiple entropies for maximizing the available information. On the other hand, we design a lightweight convolutional module to generate scale factors dynamically for minimizing the deviation between binary and real convolution. Experiments on the PASCAL VOC, COCO2014, KITTI, and VisDrone datasets show that our method improves accuracy compared with previous binary neural networks.
16

Simons, Taylor, and Dah-Jye Lee. "Efficient Binarized Convolutional Layers for Visual Inspection Applications on Resource-Limited FPGAs and ASICs." Electronics 10, no. 13 (June 23, 2021): 1511. http://dx.doi.org/10.3390/electronics10131511.

Abstract:
There has been a recent surge in publications related to binarized neural networks (BNNs), which use binary values to represent both the weights and activations in deep neural networks (DNNs). Due to the bitwise nature of BNNs, there have been many efforts to implement BNNs on ASICs and FPGAs. While BNNs are excellent candidates for these kinds of resource-limited systems, most implementations still require very large FPGAs or CPU-FPGA co-processing systems. Our work focuses on reducing the computational cost of BNNs even further, making them more efficient to implement on FPGAs. We target embedded visual inspection tasks, such as quality-inspection sorting of manufactured parts and agricultural produce sorting. We propose a new binarized convolutional layer, called the neural jet features layer, that learns well-known classic computer vision kernels that are efficient to calculate as a group. We show that on visual inspection tasks, neural jet features perform comparably to standard BNN convolutional layers while using fewer computational resources. We also show that neural jet features tend to be more stable than BNN convolution layers when training small models.
17

Xue, Ping, Yang Lu, Jingfei Chang, Xing Wei, and Zhen Wei. "Fast and Accurate Binary Neural Networks Based on Depth-Width Reshaping." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10684–92. http://dx.doi.org/10.1609/aaai.v37i9.26268.

Abstract:
Network binarization (i.e., binary neural networks, BNNs) can efficiently compress deep neural networks and accelerate model inference but cause severe accuracy degradation. Existing BNNs are mainly implemented based on the commonly used full-precision network backbones, and then the accuracy is improved with various techniques. However, there is a question of whether the full-precision network backbone is well adapted to BNNs. We start from the factors of the performance degradation of BNNs and analyze the problems of directly using full-precision network backbones for BNNs: for a given computational budget, the backbone of a BNN may need to be shallower and wider compared to the backbone of a full-precision network. With this in mind, Depth-Width Reshaping (DWR) is proposed to reshape the depth and width of existing full-precision network backbones and further optimize them by incorporating pruning techniques to better fit the BNNs. Extensive experiments demonstrate the analytical result and the effectiveness of the proposed method. Compared with the original backbones, the DWR backbones constructed by the proposed method result in close to O(√s) decrease in activations, while achieving an absolute accuracy increase by up to 1.7% with comparable computational cost. Besides, by using the DWR backbones, existing methods can achieve new state-of-the-art (SOTA) accuracy (e.g., 67.2% on ImageNet with ResNet-18 as the original backbone). We hope this work provides a novel insight into the backbone design of BNNs. The code is available at https://github.com/pingxue-hfut/DWR.
18

Xi, Jiazhen, and Hiroyuki Yamauchi. "A Layer-Wise Ensemble Technique for Binary Neural Network." International Journal of Pattern Recognition and Artificial Intelligence 35, no. 08 (March 5, 2021): 2152011. http://dx.doi.org/10.1142/s021800142152011x.

Abstract:
Binary neural networks (BNNs) have drawn much attention because they are among the most promising techniques for meeting desired memory footprint and inference speed requirements. However, they still suffer from severe intrinsic instability of error convergence, resulting in increases in prediction error and its standard deviation, mostly caused by the inherently poor representation with only two possible values, −1 and +1. In this work, we propose a cost-aware layer-wise ensemble method to address this issue without incurring excessive costs, characterized by (1) layer-wise bagging and (2) cost-aware layer selection for the bagging. One of the experimental results shows that the proposed method reduces the error and its standard deviation by 15% and 54% on CIFAR-10, respectively, compared to a baseline BNN. This paper demonstrates and discusses this error reduction and stability performance with high versatility, based on comparison results for various combinations of the base network model with the proposed and state-of-the-art prior techniques, while varying the network sizes and the CIFAR-10, SVHN, and MNIST datasets.
19

Gao, Jiabao, Qingliang Liu, and Jinmei Lai. "An Approach of Binary Neural Network Energy-Efficient Implementation." Electronics 10, no. 15 (July 30, 2021): 1830. http://dx.doi.org/10.3390/electronics10151830.

Abstract:
Binarized neural networks (BNNs), which have 1-bit weights and activations, are well suited to FPGA accelerators, as their dominant computations are bitwise arithmetic and the reduction in memory requirements means that all the network parameters can be stored in internal memory. However, the energy efficiency of these accelerators is still restricted by the abundant redundancies in BNNs. This hinders their deployment for applications in smart sensors and tiny devices, where the constraints on energy consumption are tight. To overcome this problem, we propose an approach to implementing BNN inference that offers excellent energy efficiency by pruning the massive redundant operations while maintaining the original accuracy of the networks. Firstly, inspired by the observation that the convolution processes of two related kernels contain many repeated computations, we build a formula that clarifies the reuse relationship between their convolutional outputs and removes the unnecessary operations. Furthermore, by generalizing this reuse relationship to a tile of kernels in one neuron, we adopt an inclusion pruning strategy to skip the superfluous evaluations of neurons whose real output values can be determined early. Finally, we evaluate our system on the Zynq 7000 XC7Z100 FPGA platform. Our design prunes 51 percent of the operations without any accuracy loss. Meanwhile, the energy efficiency of our system is as high as 6.55 × 10^5 Img/kJ, which is 118× better than the best accelerator based on an NVIDIA Tesla V100 GPU and 3.6× higher than the state-of-the-art FPGA implementations for BNNs.
20

Chang, Liang, Xin Ma, Zhaohao Wang, Youguang Zhang, Yuan Xie, and Weisheng Zhao. "PXNOR-BNN: In/With Spin-Orbit Torque MRAM Preset-XNOR Operation-Based Binary Neural Networks." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 27, no. 11 (November 2019): 2668–79. http://dx.doi.org/10.1109/tvlsi.2019.2926984.

21

Li, Yanfei, Tong Geng, Ang Li, and Huimin Yu. "BCNN: Binary complex neural network." Microprocessors and Microsystems 87 (November 2021): 104359. http://dx.doi.org/10.1016/j.micpro.2021.104359.

22

Xu, Sheng, Chang Liu, Baochang Zhang, Jinhu Lü, Guodong Guo, and David Doermann. "BiRe-ID: Binary Neural Network for Efficient Person Re-ID." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1s (February 28, 2022): 1–22. http://dx.doi.org/10.1145/3473340.

Abstract:
Person re-identification (Re-ID) has been promoted by the significant success of convolutional neural networks (CNNs). However, the application of such CNN-based Re-ID methods entails tremendous computation and memory consumption, which hinders their deployment on resource-limited devices such as next-generation AI chips. As a result, CNN binarization, which leads to binary neural networks (BNNs), has attracted increasing attention. In this article, we propose a new BNN-based framework for efficient person Re-ID (BiRe-ID). We find that the significant performance drop of binarized models on the Re-ID task is caused by the degraded representation capacity of kernels and features. To address these issues, we propose kernel and feature refinement based on generative adversarial learning (KR-GAL and FR-GAL) to enhance the representation capacity of BNNs. We first introduce an adversarial attention mechanism to refine the binarized kernels based on their real-valued counterparts. Specifically, we introduce a scale factor to restore the scale of the 1-bit convolution and employ an effective generative adversarial learning method to train the attention-aware scale factor. Furthermore, we introduce a self-supervised generative adversarial network to refine the low-level features using the corresponding high-level semantic information. Extensive experiments demonstrate that BiRe-ID can be effectively implemented on various mainstream backbones for the Re-ID task. In terms of performance, BiRe-ID surpasses existing binarization methods by significant margins, even reaching levels comparable with the real-valued counterparts. For example, on Market-1501, BiRe-ID achieves 64.0% mAP on a ResNet-18 backbone, with a theoretical 12.51× speedup and an 11.75× storage saving. In particular, the KR-GAL and FR-GAL methods show strong generalization on multiple tasks, such as Re-ID, image classification, object detection, and 3D point cloud processing.
23

Zou, Wanbing, Song Cheng, Luyuan Wang, Guanyu Fu, Delong Shang, Yumei Zhou, and Yi Zhan. "Increasing Information Entropy of Both Weights and Activations for the Binary Neural Networks." Electronics 10, no. 16 (August 12, 2021): 1943. http://dx.doi.org/10.3390/electronics10161943.

Abstract:
In terms of memory footprint and computing speed, binary neural networks (BNNs) have great advantages for power-aware deployment applications, such as AIoT edge terminals and wearable and portable devices. However, the binarization process inevitably brings considerable information loss and, in turn, accuracy deterioration. To tackle these problems, we start our analysis from an information-theoretic perspective and seek to improve the networks’ information capacity. Based on the analysis, our work makes two primary contributions. The first is a newly proposed median loss (ML) regularization technique, which makes the binary weight distribution more even and consequently greatly increases the information capacity of BNNs. The second is the batch median of activations (BMA) method, which raises the entropy of activations by subtracting a median value and simultaneously lowers the quantization error by computing separate scaling factors for the positive and negative activations. Experimental results show that the proposed methods, applied to ResNet-18 and ResNet-34, outperform the Bi-Real baseline by 1.3% and 0.9% Top-1 accuracy, respectively, on ImageNet 2012. The storage and computation overheads of the proposed ML and BMA are minor and negligible. Comprehensive experiments also show that our methods can be embedded into popular existing BNNs, with accuracy improvements and negligible overhead.
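As we understand the BMA method from this abstract, the activation path might be sketched as follows (the exact placement of the median subtraction and the scale-factor computation in the paper may differ):

```python
import torch

def bma_binarize(a: torch.Tensor) -> torch.Tensor:
    # Subtracting the batch median splits activations roughly 50/50 into
    # positive and negative, maximizing the entropy of the sign bits.
    a = a - a.median()
    # Separate scale factors for the positive and negative halves reduce
    # the quantization error relative to a single shared factor.
    pos_scale = a[a > 0].abs().mean()
    neg_scale = a[a <= 0].abs().mean()
    return torch.where(a > 0, pos_scale, -neg_scale)
```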
24

de Sousa, André L., Mário P. Véstias, and Horácio C. Neto. "Multi-Model Inference Accelerator for Binary Convolutional Neural Networks." Electronics 11, no. 23 (November 30, 2022): 3966. http://dx.doi.org/10.3390/electronics11233966.

Abstract:
Binary convolutional neural networks (BCNNs) have shown good accuracy for small to medium neural network models. Their extreme quantization of weights and activations reduces off-chip data transfer and greatly reduces the computational complexity of convolutions. Further reduction in the complexity of a BCNN model for fast execution can be achieved through model size reduction, at the cost of network accuracy. In this paper, a multi-model inference technique is proposed to reduce the execution time of the binarized inference process without reducing accuracy. The technique considers a cascade of neural network models with different computation/accuracy ratios. A parameterizable binarized neural network with different trade-offs between complexity and accuracy is used to obtain multiple network models. We also propose a hardware accelerator for multi-model inference in embedded systems. The multi-model inference accelerator is demonstrated on low-density Zynq-7010 and Zynq-7020 FPGA devices, classifying images from the CIFAR-10 dataset. The proposed accelerator improves the frame rate per LUT by 7.2× over previous solutions on a Zynq-7020 FPGA with similar accuracy, showing the effectiveness of the multi-model inference technique and the efficiency of the proposed hardware accelerator.
25

Peng, Hanyu, and Shifeng Chen. "BDNN: Binary convolution neural networks for fast object detection." Pattern Recognition Letters 125 (July 2019): 91–97. http://dx.doi.org/10.1016/j.patrec.2019.03.026.

26

PENG, YUN, and MIAO JIN. "A NEURAL NETWORK APPROACH TO APPROXIMATING MAP IN BELIEF NETWORKS." International Journal of Neural Systems 12, no. 03n04 (June 2002): 271–90. http://dx.doi.org/10.1142/s0129065702001175.

Abstract:
Bayesian belief networks (BBNs) are a widely studied graphical model for representing uncertainty and probabilistic interdependence among variables. One of the factors that restricts the model’s wide acceptance in practical applications is that general inference with BBNs is NP-hard. This is also true for the maximum a posteriori probability (MAP) problem, which is to find the most probable joint value assignment to all uninstantiated variables, given the instantiation of some variables in a BBN. To circumvent the difficulty caused by MAP’s computational complexity, we suggest in this paper a neural network approximation approach. With this approach, a BBN is treated as a neural network without any change or transformation of the network structure, and the node activation functions are derived from an energy function defined over the given BBN. Three methods are developed: a hill-climbing-style discrete method, a simulated annealing method, and a continuous method based on mean-field theory. All three methods are for BBNs of general structure, with the restriction that the nodes of the BBN are binary variables. In addition, rules for applying these methods to noisy-or networks are developed, which may lead to more efficient computation in some cases. The methods’ convergence is analyzed, and their validity tested through a series of computer experiments with two BBNs of moderate size and complexity. Although additional theoretical and empirical work is needed, the analysis and experiments suggest that this approach may lead to effective and accurate approximations for MAP problems.
27

Coluccio, Andrea, Marco Vacca, and Giovanna Turvani. "Logic-in-Memory Computation: Is It Worth It? A Binary Neural Network Case Study." Journal of Low Power Electronics and Applications 10, no. 1 (February 22, 2020): 7. http://dx.doi.org/10.3390/jlpea10010007.

Abstract:
Recently, the Logic-in-Memory (LiM) concept has been widely studied in the literature. This paradigm represents one of the most efficient ways to overcome the limitations of a von Neumann architecture: by placing simple logic circuits inside or near a memory element, it is possible to compute locally without fetching data from the main memory. Although this concept introduces many advantages from a theoretical point of view, its implementation can add considerable complexity to the memory itself, leading to a more sophisticated design flow. As a case study, Binary Neural Networks (BNNs) have been chosen. BNNs binarize both weights and inputs, transforming multiply-and-accumulate into a simpler bitwise logical operation while maintaining high accuracy, making them well suited to a LiM implementation. In this paper, we present two circuits implementing a BNN model in CMOS technology. The first, called the Out-Of-Memory (OOM) architecture, follows a standard von Neumann structure. The same architecture was then redesigned to adapt the critical part of the algorithm to a modified memory that is also capable of executing logic calculations. By comparing the OOM and LiM architectures, we aim to evaluate whether the Logic-in-Memory paradigm is worth it. The results highlight that LiM architectures have a clear advantage over von Neumann architectures, reducing energy consumption while increasing the overall speed of the circuit.
28

Choi, Jeong Hwan, Young-Ho Gong, and Sung Woo Chung. "A System-Level Exploration of Binary Neural Network Accelerators with Monolithic 3D Based Compute-in-Memory SRAM." Electronics 10, no. 5 (March 8, 2021): 623. http://dx.doi.org/10.3390/electronics10050623.

Abstract:
Binary neural networks (BNNs) are well suited to energy-constrained embedded systems thanks to their binarized parameters. Several researchers have proposed compute-in-memory (CiM) SRAMs for the XNOR-and-accumulate computations (XACs) in BNNs by adding transistors to the conventional 6T SRAM, which reduces the latency and energy of data movement. However, due to the additional transistors, CiM SRAMs suffer from larger area and longer wires than conventional 6T SRAMs. Meanwhile, monolithic 3D (M3D) integration enables fine-grained 3D integration, reducing the 2D wire length in small functional units. In this paper, we propose a BNN accelerator (BNN_Accel), composed of a 9T CiM SRAM (CiM_SRAM), an input buffer, and global periphery logic, to execute the computations in the binarized convolution layers of BNNs. We also propose CiM_SRAM with subarray-level M3D integration (as well as transistor-level M3D integration), which reduces the wire latency and energy compared to the 2D planar CiM_SRAM. Across the binarized convolution layers, our simulation results show that BNN_Accel with the 4-layer CiM_SRAM reduces the average execution time and energy by 39.9% and 23.2%, respectively, compared to BNN_Accel with the 2D planar CiM_SRAM.
29

Gundersen, Kristian, Guttorm Alendal, Anna Oleynik, and Nello Blaser. "Binary Time Series Classification with Bayesian Convolutional Neural Networks When Monitoring for Marine Gas Discharges." Algorithms 13, no. 6 (June 19, 2020): 145. http://dx.doi.org/10.3390/a13060145.

Abstract:
The world’s oceans are under stress from climate change, acidification and other human activities, and the UN has declared 2021–2030 as the decade for marine science. To monitor the marine waters, with the purpose of detecting discharges of tracers from unknown locations, large areas will need to be covered with limited resources. To increase the detectability of marine gas seepage we propose a deep probabilistic learning algorithm, a Bayesian Convolutional Neural Network (BCNN), to classify time series of measurements. The BCNN will classify time series to belong to a leak/no-leak situation, including classification uncertainty. The latter is important for decision makers who must decide to initiate costly confirmation surveys and, hence, would like to avoid false positives. Results from a transport model are used for the learning process of the BCNN and the task is to distinguish the signal from a leak hidden within the natural variability. We show that the BCNN classifies time series arising from leaks with high accuracy and estimates its associated uncertainty. We combine the output of the BCNN model, the posterior predictive distribution, with a Bayesian decision rule showcasing how the framework can be used in practice to make optimal decisions based on a given cost function.
30

Babu, Bileesh Plakkal, and Swathi Jamjala Narayanan. "One-vs-All Convolutional Neural Networks for Synthetic Aperture Radar Target Recognition." Cybernetics and Information Technologies 22, no. 3 (September 1, 2022): 179–97. http://dx.doi.org/10.2478/cait-2022-0035.

Abstract:
Convolutional Neural Networks (CNNs) have been widely utilized for Automatic Target Recognition (ATR) in Synthetic Aperture Radar (SAR) images. However, a large number of parameters and huge training data requirements limit CNNs’ use in SAR ATR. While previous works have primarily focused on model compression and structural modification of CNNs, this paper employs the One-vs-All (OVA) technique on CNNs to address these issues. The OVA-CNN comprises several binary classifying CNNs (BCNNs), each acting as an expert in correctly recognizing a single target. The BCNN that predicts the highest probability for a given target determines the class to which the target belongs. Evaluation of the model using various metrics on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark dataset illustrates that the OVA-CNN has fewer weight parameters and lower training sample requirements while exhibiting a high recognition rate.
31

Lee, Su-Jung, Gil-Ho Kwak, and Tae-Hwan Kim. "TORRES: A Resource-Efficient Inference Processor for Binary Convolutional Neural Networks Based on Locality-Aware Operation Skipping." Electronics 11, no. 21 (October 29, 2022): 3534. http://dx.doi.org/10.3390/electronics11213534.

Abstract:
A binary convolutional neural network (BCNN) is a neural network promising to realize analysis of visual imagery in low-cost resource-limited devices. This study presents an efficient inference processor for BCNNs, named TORRES. TORRES performs inference efficiently, skipping operations based on the spatial locality inherent in feature maps. The training process is regularized with the objective of skipping more operations. The microarchitecture is designed to skip operations and generate addresses efficiently with low resource usage. A prototype inference system based on TORRES has been implemented in a 28 nm field-programmable gate array, and its functionality has been verified for practical inference tasks. Implemented with 2.31 K LUTs, TORRES achieves the inference speed of 291.2 GOP/s, exhibiting the resource efficiency of 126.06 MOP/s/LUT. The resource efficiency of TORRES is 1.45 times higher than that of the state-of-the-art work.
32

Siddiqui, Shama, Rory Nesbitt, Muhammad Zeeshan Shakir, Anwar Ahmed Khan, Ausaf Ahmed Khan, Karima Karam Khan, and Naeem Ramzan. "Artificial Neural Network (ANN) Enabled Internet of Things (IoT) Architecture for Music Therapy." Electronics 9, no. 12 (November 29, 2020): 2019. http://dx.doi.org/10.3390/electronics9122019.

Abstract:
Alternative medicine techniques such as music therapy have recently attracted the interest of medical practitioners and researchers. Significant clinical evidence suggests that music has a positive influence on pain, stress, and anxiety in patients dealing with cancer, pre- and post-surgery care, insomnia, childbirth, end-of-life care, etc. Similarly, the technologies of the Internet of Things (IoT), Body Area Networks (BANs), and Artificial Neural Networks (ANNs) have been playing a vital role in improving the health and safety of the population by offering continuous remote monitoring and immediate medical response. In this article, we propose a novel ANN-enabled IoT architecture that integrates music therapy with BAN and ANN to provide immediate assistance to patients by automating the music therapy process. The proposed architecture comprises monitoring the body parameters of patients using a BAN, categorizing the disease using an ANN, and playing music of the most appropriate type on the patient’s handheld device when required. In addition, the ANN exploits music analytics, such as the type and duration of the music played and its impact on the patient’s body parameters, to iteratively improve the automated music therapy process. We detail the development of a prototype Android app that builds a playlist and plays music according to the emotional state of the user in real time. Data for pulse rate, blood pressure, and breath rate were generated using Node-RED, and the ANN was created using Google Colaboratory (Colab). An MQTT broker is used to send the generated data to the Android device. The ANN uses binary and categorical cross-entropy loss functions, the Adam optimiser, and the ReLU activation function to predict the mood of the patient and suggest the most appropriate type of music.
33

Sarker, Ronobir, Amandeep Kaur, and D. Singh. "Noise Estimation Using Back Propagation Neural Networks." ECS Transactions 107, no. 1 (April 24, 2022): 18761–68. http://dx.doi.org/10.1149/10701.18761ecst.

Abstract:
In this paper, a new Backpropagation Neural Network (BNN) based noise estimation method is proposed to estimate Rician noise in MRI images. To train the BNN, features of MRI images such as contrast, homogeneity, dissimilarity, ASM, energy, entropy, mean x, mean y, mean glcm, var x, var y, var glcm, correlation, skew x, skew y, skew, kurtosis x, kurtosis y, kurtosis, etc. are used. For training the BNN, 450 images downloaded from BrainWeb are used.
34

Wagner, Philipp, Xinyang Wu, and Marco F. Huber. "Kalman Bayesian Neural Networks for Closed-Form Online Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10069–77. http://dx.doi.org/10.1609/aaai.v37i8.26200.

Abstract:
Compared to point estimates calculated by standard neural networks, Bayesian neural networks (BNN) provide probability distributions over the output predictions and model parameters, i.e., the weights. Training the weight distribution of a BNN, however, is more involved due to the intractability of the underlying Bayesian inference problem and thus, requires efficient approximations. In this paper, we propose a novel approach for BNN learning via closed-form Bayesian inference. For this purpose, the calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems, where the weights are modeled as Gaussian random variables. This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent. We demonstrate our method on several UCI datasets and compare it to the state of the art.
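For readers unfamiliar with the filtering view, the scalar Kalman update below is the closed-form building block such a method rests on; this is the textbook update, not the paper's full matrix formulation:

```python
def kalman_update(mu, var, y, obs_var):
    # Treat a weight as a Gaussian N(mu, var) and refine it from a noisy
    # observation y with variance obs_var -- no gradient descent involved.
    gain = var / (var + obs_var)          # Kalman gain
    return mu + gain * (y - mu), (1.0 - gain) * var

mu, var = kalman_update(0.0, 1.0, 0.5, 0.25)  # posterior mean 0.4, var 0.2
```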
35

Park, Namuk, Taekyu Lee, and Songkuk Kim. "Vector Quantized Bayesian Neural Network Inference for Data Streams." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9322–30. http://dx.doi.org/10.1609/aaai.v35i10.17124.

Abstract:
Bayesian neural networks (BNNs) can estimate the uncertainty in predictions, as opposed to non-Bayesian neural networks (NNs). However, BNNs have been far less widely used than non-Bayesian NNs in practice, since they need iterative NN executions to predict a result for a single input, which gives rise to prohibitive computational cost. This computational burden is a critical problem when processing data streams with low latency. To address this problem, we propose a novel model, VQ-BNN, which approximates BNN inference for data streams. To reduce the computational burden, VQ-BNN inference runs the NN prediction only once and compensates the result with previously memorized predictions. Specifically, VQ-BNN inference for data streams is given by temporal exponential smoothing of recent predictions. The computational cost of this model is almost the same as that of non-Bayesian NNs. Experiments, including semantic segmentation on real-world data, show that this model performs significantly faster than BNNs while estimating predictive results comparable or superior to those of BNNs.
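The "temporal exponential smoothing of recent predictions" admits a very small sketch; the class and parameter names are our assumptions, not the paper's code:

```python
import torch

class SmoothedStreamPredictor:
    """One NN execution per stream element; recent predictions are
    exponentially smoothed in place of repeated BNN sampling."""
    def __init__(self, model, decay=0.8):
        self.model, self.decay, self.state = model, decay, None

    @torch.no_grad()
    def __call__(self, x):
        pred = self.model(x)
        if self.state is None:
            self.state = pred
        else:
            self.state = self.decay * self.state + (1 - self.decay) * pred
        return self.state
```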
36

McCormack, Michael D., David E. Zaucha, and Dennis W. Dushek. "First‐break refraction event picking and seismic data trace editing using neural networks." GEOPHYSICS 58, no. 1 (January 1993): 67–78. http://dx.doi.org/10.1190/1.1443352.

Abstract:
Interactive seismic processing systems for editing noisy seismic traces and picking first‐break refraction events have been developed using a neural network learning algorithm. We employ a backpropagation neural network (BNN) paradigm modified to improve the convergence rate of the BNN. The BNN is interactively “trained” to edit seismic data or pick first breaks by a human processor who judiciously selects and presents to the network examples of trace edits or refraction picks. The network then iteratively adjusts a set of internal weights until it can accurately duplicate the examples provided by the user. After the training session is completed, the BNN system can then process new data sets in a manner that mimics the human processor. Synthetic modeling studies indicate that the BNN uses many of the same subjective criteria that humans employ in editing and picking seismic data sets. Automated trace editing and first‐break picking based on the modified BNN paradigm achieve 90 to 98 percent agreement with manual methods for seismic data of moderate to good quality. Productivity increases over manual editing, and picking techniques range from 60 percent for two‐dimensional (2-D) data sets and up to 800 percent for three‐dimensional (3-D) data sets. Neural network‐based seismic processing can provide consistent and high quality results with substantial improvements in processing efficiency.
APA, Harvard, Vancouver, ISO, and other styles
37

Świetlicka, Aleksandra, Karol Gugała, Marta Kolasa, Jolanta Pauk, Andrzej Rybarczyk, and Rafał Długosz. "A New Model of the Neuron for Biological Spiking Neural Network Suitable for Parallel Data Processing Realized in Hardware." Solid State Phenomena 199 (March 2013): 217–22. http://dx.doi.org/10.4028/www.scientific.net/ssp.199.217.

Full text
Abstract:
The paper presents a modification of the structure of a biological neural network (BNN) based on spiking neuron models. The proposed modification makes it possible to influence the level of the stimulus response of particular neurons in the BNN. We consider an extended, three-dimensional Hodgkin-Huxley model of the neural cell. A typical BNN composed of such neural cells has been extended by adding resistors at each branch point. The resistors can be treated as the weights of the BNN. We demonstrate that adding these elements to the BNN significantly affects the waveform of the potential on the neuron's membrane, causing an uncontrolled excitation. This provides a better description of the processes that take place in the nerve cell. Such a BNN enables easy adaptation of the learning rules used in artificial or spiking neural networks. The modified BNN has been implemented on a graphics processing unit (GPU) in the CUDA C language. This platform enables parallel data processing, which is an important feature in such applications.
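For reference, these are the standard Hodgkin-Huxley membrane and gating equations that the extended model builds on (the paper's three-dimensional extension and added branch-point resistors are not captured here):

```latex
% Standard Hodgkin-Huxley equations: membrane potential V, gating variables m, h, n
C_m \frac{dV}{dt} = I_{ext} - g_{Na}\, m^3 h\, (V - E_{Na}) - g_K\, n^4 (V - E_K) - g_L (V - E_L)
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \qquad x \in \{m, h, n\}
```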
APA, Harvard, Vancouver, ISO, and other styles
38

Liang, Jiuzhen, Wei Song, and Mei Wang. "Stock Price Prediction Based on Procedural Neural Networks." Advances in Artificial Neural Systems 2011 (June 15, 2011): 1–11. http://dx.doi.org/10.1155/2011/814769.

Full text
Abstract:
We present a spatiotemporal model, namely, procedural neural networks, for stock price prediction. Compared with some traditional models that have been successful in simulating the stock market, such as BNN (backpropagation neural networks), HMM (hidden Markov models), and SVM (support vector machines), the procedural neural network model processes both spatial and temporal information synchronously, without the sliding time window typically used in the well-known recurrent neural networks. Two different structures of procedural neural networks are constructed for modeling multidimensional time-series problems. Learning algorithms for training the models and sustained improvement of learning are presented and discussed. Experiments on Yahoo stock-market data from the past decade are implemented, and simulation results from PNN, BNN, HMM, and SVM are compared.
APA, Harvard, Vancouver, ISO, and other styles
39

Nguyen, Andre T., Fred Lu, Gary Lopez Munoz, Edward Raff, Charles Nicholas, and James Holt. "Out of Distribution Data Detection Using Dropout Bayesian Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7877–85. http://dx.doi.org/10.1609/aaai.v36i7.20757.

Full text
Abstract:
We explore the utility of information contained within a dropout based Bayesian neural network (BNN) for the task of detecting out of distribution (OOD) data. We first show how previous attempts to leverage the randomized embeddings induced by the intermediate layers of a dropout BNN can fail due to the distance metric used. We introduce an alternative approach to measuring embedding uncertainty, and demonstrate how incorporating embedding uncertainty improves OOD data identification across three tasks: image classification, language classification, and malware detection.
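A minimal sketch of scoring OOD-ness by the spread of dropout-randomized embeddings; the toy embedding function and sample count below are stand-ins, and the paper's actual contribution, the specific uncertainty measure replacing the failing distance metrics, is not reproduced:

```python
import numpy as np

def embedding_uncertainty(stochastic_embed, x, n_samples=32):
    """Score OOD-ness by the spread of dropout-randomized embeddings.
    `stochastic_embed` is any function whose dropout masks are resampled
    per call (e.g., a torch model kept in train mode)."""
    embs = np.stack([stochastic_embed(x) for _ in range(n_samples)])
    return embs.var(axis=0).mean()   # higher spread -> more likely OOD

# Toy stand-in for an intermediate-layer embedding with dropout noise
rng = np.random.default_rng(0)
def toy_embed(x):
    drop = rng.random(8) < 0.5                   # resampled dropout mask
    return np.tanh(x * np.arange(1, 9)) * drop * 2.0

print(embedding_uncertainty(toy_embed, 0.1))   # near-training-range input
print(embedding_uncertainty(toy_embed, 5.0))   # far-from-training input, larger spread
```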
APA, Harvard, Vancouver, ISO, and other styles
40

Jiang, Chuan Jin. "The Application of Bayesian Neural Network in Rainfall Forecasting." Key Engineering Materials 439-440 (June 2010): 1300–1305. http://dx.doi.org/10.4028/www.scientific.net/kem.439-440.1300.

Full text
Abstract:
The process of rainfall forecasting is complex, highly nonlinear, and exhibits both temporal and spatial variability. In this article, a rainfall forecasting model using Bayesian neural networks (BNN) is proposed. The study uses data from a coastal forest catchment and examines the accuracy of short-term rainfall forecasts obtained by BNN time-series analysis techniques, using antecedent rainfall depths and streamflow as input information. The verification results indicate that the BNN rainfall forecasting model presented in this paper achieves reasonable agreement and high accuracy.
APA, Harvard, Vancouver, ISO, and other styles
41

Bae, Seongwoo, Haechan Kim, Seongjoo Lee, and Yunho Jung. "FPGA Implementation of Keyword Spotting System Using Depthwise Separable Binarized and Ternarized Neural Networks." Sensors 23, no. 12 (June 19, 2023): 5701. http://dx.doi.org/10.3390/s23125701.

Full text
Abstract:
Keyword spotting (KWS) systems are used for human–machine communications in various applications. In many cases, KWS involves a combination of wake-up-word (WUW) recognition for device activation and voice command classification tasks. These tasks present a challenge for embedded systems due to the complexity of deep learning algorithms and the need for optimized networks for each application. In this paper, we propose a depthwise separable binarized/ternarized neural network (DS-BTNN) hardware accelerator capable of performing both WUW recognition and command classification on a single device. The design achieves significant area efficiency by reusing bitwise operators in the computation of the binarized neural network (BNN) and the ternary neural network (TNN). In a complementary metal-oxide semiconductor (CMOS) 40 nm process environment, the DS-BTNN accelerator demonstrated significant efficiency. Compared with a design approach where the BNN and TNN were independently developed and subsequently integrated as two separate modules into the system, our method achieved a 49.3% area reduction while yielding an area of 0.558 mm². The designed KWS system, which was implemented on a Xilinx UltraScale+ ZCU104 field-programmable gate array (FPGA) board, receives real-time data from the microphone, preprocesses them into a mel spectrogram, and uses this as input to the classifier. Depending on the task, the network operates as a BNN for WUW recognition or as a TNN for command classification. Operating at 170 MHz, our system achieved 97.1% accuracy in BNN-based WUW recognition and 90.5% in TNN-based command classification.
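The bitwise kernel that such accelerators reuse can be sketched in a few lines: a binarized dot product computed as XNOR plus popcount (the bit encoding below is an assumption for illustration; the hardware operates on wide registers rather than Python integers):

```python
def binary_dot(w_bits, x_bits, n):
    """Binarized dot product via XNOR + popcount, the bitwise kernel that
    BNN (and, with two such bit planes, TNN) accelerators share.
    w_bits, x_bits: n-bit integers encoding {-1,+1} vectors as {0,1}."""
    xnor = ~(w_bits ^ x_bits) & ((1 << n) - 1)  # 1 wherever signs agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # matches minus mismatches

# w = (+1,-1,+1,+1), x = (+1,+1,-1,+1): dot = 1 - 1 - 1 + 1 = 0
print(binary_dot(0b1011, 0b1101, 4))   # -> 0
```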
APA, Harvard, Vancouver, ISO, and other styles
42

Yu, Haofan, Alyandra Hami Seno, Zahra Sharif Khodaei, and M. H. Ferri Aliabadi. "Structural Health Monitoring Impact Classification Method Based on Bayesian Neural Network." Polymers 14, no. 19 (September 21, 2022): 3947. http://dx.doi.org/10.3390/polym14193947.

Full text
Abstract:
This paper proposes a novel method for multi-class classification and uncertainty quantification of impact events on a flat composite plate with a structural health monitoring (SHM) system by using a Bayesian neural network (BNN). Most of the existing research in passive sensing has focused on deterministic approaches for impact detection and characterization. However, there is variability in impact location, angle, and energy under real operational conditions, which results in uncertainty in the diagnosis. Therefore, this paper proposes a reliability-based impact characterization method based on a BNN for the first time. Impact data are acquired by a passive sensing system of piezoelectric (PZT) sensors. Features extracted from the sensor signals, such as their transferred energy, frequency at maximum amplitude, and time interval of the largest peak, are used to develop a BNN for impact classification (i.e., energy level). To test the robustness and reliability of the proposed model against impact variability, it is trained with perpendicular impacts and tested with variable-angle impacts. The same dataset is further applied in a method called multi-artificial neural network (multi-ANN) to compare its ability in uncertainty quantification and its computational efficiency against the BNN, for validation of the developed meta-model. It is demonstrated that both the BNN and the multi-ANN can measure the uncertainty and confidence of the diagnosis from the prediction results. Both perform very well in classifying impact energies when the networks are trained and tested with perpendicular impacts of different energy and location, with 94% and 98% reliable predictions for the BNN and the multi-ANN, respectively. However, both meta-models struggled to detect new impact scenarios (angled impacts) when that data was not used in the development stage and only used for testing. Including additional features improved the performance of the networks in regularization, though not to an acceptable accuracy. The BNN significantly outperforms the multi-ANN in computational time and resources. For perpendicular impacts, both methods reach a reliable accuracy, while for angled impacts the accuracy decreases but the uncertainty provides additional information that can be further used to improve the classification.
APA, Harvard, Vancouver, ISO, and other styles
43

Herdiansah, Arief, Rohmat Indra Borman, Desi Nurnaningsih, Alfry Aristo J. Sinlae, and Rosyid Ridlo Al Hakim. "Klasifikasi Citra Daun Herbal Dengan Menggunakan Backpropagation Neural Networks Berdasarkan Ekstraksi Ciri Bentuk." JURIKOM (Jurnal Riset Komputer) 9, no. 2 (April 29, 2022): 388. http://dx.doi.org/10.30865/jurikom.v9i2.4066.

Full text
Abstract:
Herbal plants have been used for treatment since ancient times and are still applied in healthcare today. All parts of the plant can be used as medicine, one of which is the leaves. However, many people are still unfamiliar with medicinal leaves, because at first glance the leaves look almost the same, making them difficult to tell apart. On closer inspection, however, the leaves have characteristics by which one leaf can be distinguished from another. The purpose of this study is to classify images of herbal leaf species using the Backpropagation Neural Network (BNN) algorithm with shape feature extraction utilizing metric and eccentricity parameters. BNN is a supervised learning algorithm that consists of several layers and propagates the output error backwards to adjust the weights. In this study, the shape features that serve as input to the BNN algorithm are extracted after morphological operations that improve the segmentation results, so that the classification is more accurate. The test results show an accuracy of 88.75%, which indicates that the developed model can classify herbal leaves well.
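Under the usual definitions, the two shape descriptors named above can be computed as follows (the measurements are hypothetical, and the authors' exact formulas may differ slightly):

```python
import numpy as np

def shape_features(area, perimeter, major_axis, minor_axis):
    """Metric (circularity) and eccentricity of a segmented leaf region,
    under the usual definitions."""
    metric = 4 * np.pi * area / perimeter**2                     # 1.0 for a circle
    eccentricity = np.sqrt(1 - (minor_axis / major_axis) ** 2)   # 0.0 for a circle
    return metric, eccentricity

# A hypothetical elongated leaf region measured from a binary mask
print(shape_features(area=1200.0, perimeter=160.0, major_axis=70.0, minor_axis=25.0))
```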
APA, Harvard, Vancouver, ISO, and other styles
44

Guan, Qing-yang, and Wu Shuang. "Signal Detection in Satellite-Ground IoT Link Based on Blind Neural Network." Wireless Communications and Mobile Computing 2021 (May 6, 2021): 1–10. http://dx.doi.org/10.1155/2021/5547989.

Full text
Abstract:
At present, there are many problems in satellite-ground IoT link signal detection. Due to the complex characteristics of the satellite-ground IoT link, including the Doppler and multipath effects, and especially in scenarios related to military fields, it is difficult to use traditional methods and traditional cooperative communication methods for link signal detection. Therefore, this paper proposes efficient detection for the satellite-ground IoT link based on a blind neural network (BNN). The BNN includes two network structures: the data feature network and the error update network. Through multiple iterations of the error update network, the weights of the BNN for blind detection are optimized and the optimal elimination solution is obtained. Simulations of a satellite-to-ground link model for a low-orbit satellite show that the proposed BNN algorithm can obtain better bit-error-rate characteristics.
APA, Harvard, Vancouver, ISO, and other styles
45

Chauhan, Krishan Kumar, Garima Joshi, Manjeet Kaur, and Renu Vig. "Semiconductor wafer defect classification using convolution neural network: a binary case." IOP Conference Series: Materials Science and Engineering 1225, no. 1 (February 1, 2022): 012060. http://dx.doi.org/10.1088/1757-899x/1225/1/012060.

Full text
Abstract:
With the multitude of steps used in the semiconductor industry, automation is practiced extensively in its manufacturing processes to guarantee the quality of manufactured chips and improvements in production. At the front end of line, wafers are probed and defective chips are segregated. From these data, wafer bin maps are generated, which show defects on the surface of the wafers. If wafer bin maps are analyzed manually, defects may be categorized incorrectly due to human error and lapses of judgement. Thus, the rationale behind this research study is to determine the scope of vision-based methods for the automatic classification of wafer defects. In view of this, a Convolutional Neural Network (CNN)-based binary classifier that detects Type A and Type B defects in wafers is proposed. The study is conducted on a five-layer CNN architecture using 250 images of each class from a wafer bin map dataset. The proposed setup gives a test accuracy of 97.7%.
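A minimal sketch of a five-layer CNN binary classifier of the kind described (input size, filter counts, and training setup are assumptions, not the authors' architecture):

```python
# Sketch of a five-layer CNN binary classifier (3 conv + 2 dense weighted layers);
# the wafer-bin-map input size and filter counts are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),            # assumed wafer-bin-map size
    layers.Conv2D(16, 3, activation="relu"),    # layer 1
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),    # layer 2
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),    # layer 3
    layers.Flatten(),
    layers.Dense(64, activation="relu"),        # layer 4
    layers.Dense(1, activation="sigmoid"),      # layer 5: Type A vs. Type B
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```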
APA, Harvard, Vancouver, ISO, and other styles
46

Zanotti, Tommaso, Francesco Maria Puglisi, and Paolo Pavan. "Energy-Efficient Non-Von Neumann Computing Architecture Supporting Multiple Computing Paradigms for Logic and Binarized Neural Networks." Journal of Low Power Electronics and Applications 11, no. 3 (July 6, 2021): 29. http://dx.doi.org/10.3390/jlpea11030029.

Full text
Abstract:
Different in-memory computing paradigms enabled by emerging non-volatile memory technologies are promising solutions for the development of ultra-low-power hardware for edge computing. Among these, SIMPLY, a smart logic-in-memory architecture, provides high reconfigurability and enables the in-memory computation of both logic operations and binarized neural networks (BNNs) inference. However, operation-specific hardware accelerators can result in better performance for a particular task, such as the analog computation of the multiply and accumulate operation for BNN inference, but lack reconfigurability. Nonetheless, a solution providing the flexibility of SIMPLY while also achieving the high performance of BNN-specific analog hardware accelerators is missing. In this work, we propose a novel in-memory architecture based on 1T1R crossbar arrays, which enables the coexistence on the same crossbar array of both SIMPLY computing paradigm and the analog acceleration of the multiply and accumulate operation for BNN inference. We also highlight the main design tradeoffs and opportunities enabled by different emerging non-volatile memory technologies. Finally, by using a physics-based Resistive Random Access Memory (RRAM) compact model calibrated on data from the literature, we show that the proposed architecture improves the energy delay product by >10^3 times when performing a BNN inference task with respect to a SIMPLY implementation.
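The analog multiply-and-accumulate that such a crossbar performs can be sketched numerically: column currents sum the products of input voltages and cell conductances by Kirchhoff's law (the device values below are illustrative, not from the paper):

```python
import numpy as np

# Sketch of the analog MAC a 1T1R crossbar performs for BNN inference:
# binarized weights map to low/high resistive states, and each column
# current accumulates sum_i V_i * G_ij. Values are illustrative.
G_LRS, G_HRS = 1e-4, 1e-6                            # conductances (S), assumed
weights = np.array([[+1, -1], [-1, +1], [+1, +1]])   # binarized 3x2 weight matrix
G = np.where(weights > 0, G_LRS, G_HRS)              # conductance encoding
v_in = np.array([0.2, 0.0, 0.2])                     # voltages encoding activations
i_out = v_in @ G                                     # per-column MAC as summed current
print(i_out)            # thresholding these currents yields the binarized outputs
```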
APA, Harvard, Vancouver, ISO, and other styles
47

JUNIOR, GERALDO BRAZ, LEONARDO DE OLIVEIRA MARTINS, ARISTÓFANES CORREA SILVA, and ANSELMO CARDOSO PAIVA. "COMPARISON OF SUPPORT VECTOR MACHINES AND BAYESIAN NEURAL NETWORKS PERFORMANCE FOR BREAST TISSUES USING GEOSTATISTICAL FUNCTIONS IN MAMMOGRAPHIC IMAGES." International Journal of Computational Intelligence and Applications 09, no. 04 (December 2010): 271–88. http://dx.doi.org/10.1142/s1469026810002914.

Full text
Abstract:
Female breast cancer is a major cause of death in Western countries. Computer-aided Detection (CAD) systems can aid radiologists in increasing diagnostic accuracy. In this work, we present a comparison between two classifiers applied to the separation of normal and abnormal breast tissues from mammograms. The purpose of the comparison is to select the best prediction technique to be part of a CAD system. Each region of interest is classified by a Support Vector Machine (SVM) and a Bayesian Neural Network (BNN) as a normal or abnormal region. SVM is a machine-learning method, based on the principle of structural risk minimization, which shows good performance when applied to data outside the training set. A Bayesian Neural Network is a classifier that joins traditional neural network theory and Bayesian inference. We use a set of measures obtained by applying the semivariogram, semimadogram, covariogram, and correlogram functions to the characterization of breast tissue as normal or abnormal. The results show that SVM presents the best performance for the classification of breast tissues in mammographic images. The tests indicate that SVM has more generalization power than the BNN classifier. BNN has a sensitivity of 76.19% and a specificity of 79.31%, while SVM presents a sensitivity of 74.07% and a specificity of 98.77%. The accuracy rates in testing are 78.70% and 92.59% for BNN and SVM, respectively.
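For concreteness, a sketch of one of the geostatistical measures listed, the empirical semivariogram at lag h (the 1-D formulation and data are illustrative; the paper applies such functions to 2-D regions of interest, and the semimadogram, covariogram, and correlogram are defined analogously):

```python
import numpy as np

def semivariogram(z, h):
    """Empirical semivariogram of a 1-D pixel sequence at lag h:
    gamma(h) = half the mean squared increment between pixels h apart."""
    d = z[h:] - z[:-h]
    return 0.5 * np.mean(d ** 2)

texture = np.random.default_rng(0).random(256)   # stand-in for a mammogram ROI row
print([round(semivariogram(texture, h), 4) for h in (1, 2, 4, 8)])
```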
APA, Harvard, Vancouver, ISO, and other styles
48

Biswal, Manas Ranjan, Tahesin Samira Delwar, Abrar Siddique, Prangyadarsini Behera, Yeji Choi, and Jee-Youl Ryu. "Pattern Classification Using Quantized Neural Networks for FPGA-Based Low-Power IoT Devices." Sensors 22, no. 22 (November 10, 2022): 8694. http://dx.doi.org/10.3390/s22228694.

Full text
Abstract:
With the recent growth of the Internet of Things (IoT) and the demand for faster computation, quantized neural networks (QNNs), and QNN-enabled IoT, can offer better performance than conventional convolutional neural networks (CNNs). With the aim of reducing memory access costs and increasing computation efficiency, QNN-enabled devices are expected to transform numerous industrial applications with lower processing latency and power consumption. Another form of QNN is the binarized neural network (BNN), which quantizes weights to just two levels (a single bit). In this paper, CNN-, QNN-, and BNN-based pattern recognition techniques are implemented and analyzed on an FPGA. The FPGA hardware acts as an IoT device due to its connectivity with the cloud, and QNNs and BNNs are considered to offer better performance in terms of low power and low resource use on hardware platforms. The CNN and QNN implementations are compared based on their accuracy, weight bit error, ROC curve, and execution speed. The paper also discusses various approaches that can be deployed for optimizing CNN and QNN models with additionally available tools. The work is performed on the Xilinx Zynq 7020 series Pynq Z2 board, which serves as our FPGA-based low-power IoT device. The MNIST and CIFAR-10 databases are considered for simulation and experimentation. The work shows that, for full precision (32-bit), the accuracy is 95.5% and 79.22% for the MNIST and CIFAR-10 databases, respectively, and the execution time is 5.8 ms and 18 ms, respectively.
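The relationship between full-precision, quantized, and binarized weights can be illustrated with a uniform symmetric quantizer (a generic sketch, not the toolchain used in the paper):

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantizer: 32-bit weights -> 2**bits levels.
    bits=1 reduces to the sign() binarization used by BNNs."""
    if bits == 1:
        return np.sign(w) + (w == 0)          # map to {-1, +1}, zeros to +1
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

w = np.random.default_rng(0).normal(size=5).astype(np.float32)
print(quantize(w, 8))   # QNN-style weights
print(quantize(w, 1))   # BNN-style weights
```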
APA, Harvard, Vancouver, ISO, and other styles
49

Vidyasagar, M. "Are analog neural networks better than binary neural networks?" Circuits, Systems, and Signal Processing 17, no. 2 (March 1998): 243–70. http://dx.doi.org/10.1007/bf01202855.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Zeng, Xia, Zhengfeng Yang, Li Zhang, Xiaochao Tang, Zhenbing Zeng, and Zhiming Liu. "Safety Verification of Nonlinear Systems with Bayesian Neural Network Controllers." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15278–86. http://dx.doi.org/10.1609/aaai.v37i12.26782.

Full text
Abstract:
Bayesian neural networks (BNNs) retain NN structures with a probability distribution placed over their weights. With the introduced uncertainties and redundancies, BNNs are proper choices of robust controllers for safety-critical control systems. This paper considers the problem of verifying the safety of nonlinear closed-loop systems with BNN controllers over an unbounded time horizon. In essence, we compute a safe weight set such that, as long as the BNN controller is always applied with weights sampled from the safe weight set, the controlled system is guaranteed to be safe. We propose a novel two-phase method for the safe weight set computation. First, we construct a reference safe control set that constrains the control inputs, through polynomial approximation of the BNN controller followed by polynomial-optimization-based barrier certificate generation. Then, the computation of the safe weight set is reduced to a range inclusion problem of the BNN on the system domain w.r.t. the safe control set, which can be solved incrementally, and the set of safe weights can be extracted. Compared with the existing method based on invariant learning and mixed-integer linear programming, we could compute safe weight sets with larger radii on a series of linear benchmarks. Moreover, experiments on a series of widely used nonlinear control tasks show that our method can synthesize large safe weight sets with a probability measure as high as 95%, even for a large-scale system of dimension 7.
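For orientation, these are the standard barrier-certificate conditions behind the first phase (stated generically; the paper's precise formulation over the polynomial approximation of the BNN controller may differ):

```latex
% Generic barrier-certificate conditions for \dot{x} = f(x, u) with controller
% u = \kappa(x): B separates all trajectories from the unsafe set for all time.
B(x) \le 0 \ \ \forall x \in X_{init}, \qquad
B(x) > 0 \ \ \forall x \in X_{unsafe}, \qquad
\nabla B(x) \cdot f(x, \kappa(x)) \le \lambda\, B(x) \ \ \forall x \in X
```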
APA, Harvard, Vancouver, ISO, and other styles