A ready-made bibliography on the topic "Binary neural networks (BNN)"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles


See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Binary neural networks (BNN)".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided the relevant parameters are available in the work's metadata.

Journal articles on the topic "Binary neural networks (BNN)"

1

Rozen, Tal, Moshe Kimhi, Brian Chmiel, Avi Mendelson, and Chaim Baskin. "Bimodal-Distributed Binarized Neural Networks". Mathematics 10, no. 21 (November 3, 2022): 4107. http://dx.doi.org/10.3390/math10214107.

Abstract:
Binary neural networks (BNNs) are an extremely promising method for reducing deep neural networks’ complexity and power consumption significantly. Binarization techniques, however, suffer from ineligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for sign function approximation during the forward and backward phases to reduce the quantization error during the binarization process. In this work, we propose a bimodal-distributed binarization method (BD-BNN). The newly proposed technique aims to impose a bimodal distribution of the network weights by kurtosis regularization. The proposed method consists of a teacher–trainer training scheme termed weight distribution mimicking (WDM), which efficiently imitates the full-precision network weight distribution to their binary counterpart. Preserving this distribution during binarization-aware training creates robust and informative binary feature maps and thus it can significantly reduce the generalization error of the BNN. Extensive evaluations on CIFAR-10 and ImageNet demonstrate that our newly proposed BD-BNN outperforms current state-of-the-art schemes.
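For readers unfamiliar with kurtosis regularization, the following is a minimal NumPy sketch of the general idea, assuming a simple squared distance to a fixed kurtosis target; the exact regularizer and target used by BD-BNN may differ.

```python
import numpy as np

def kurtosis(w: np.ndarray) -> float:
    # Fourth standardized moment of the flattened weight tensor.
    w = w.ravel()
    mu, sigma = w.mean(), w.std() + 1e-12
    return float(np.mean(((w - mu) / sigma) ** 4))

def kurtosis_penalty(w: np.ndarray, target: float = 1.0) -> float:
    # A symmetric two-spike (+1/-1) weight distribution has kurtosis 1, so
    # penalizing the distance to a low target nudges the weights toward a
    # bimodal shape that is friendlier to sign binarization.
    return (kurtosis(w) - target) ** 2

w = np.random.randn(256, 128).astype(np.float32)  # roughly Gaussian, kurtosis near 3
print(f"kurtosis = {kurtosis(w):.2f}, penalty = {kurtosis_penalty(w):.2f}")
```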
2

Cho, Jaechan, Yongchul Jung, Seongjoo Lee, and Yunho Jung. "Reconfigurable Binary Neural Network Accelerator with Adaptive Parallelism Scheme". Electronics 10, no. 3 (January 20, 2021): 230. http://dx.doi.org/10.3390/electronics10030230.

Abstract:
Binary neural networks (BNNs) have attracted significant interest for the implementation of deep neural networks (DNNs) on resource-constrained edge devices, and various BNN accelerator architectures have been proposed to achieve higher efficiency. BNN accelerators can be divided into two categories: streaming and layer accelerators. Although streaming accelerators designed for a specific BNN network topology provide high throughput, they are infeasible for various sensor applications in edge AI because of their complexity and inflexibility. In contrast, layer accelerators with reasonable resources can support various network topologies, but they operate with the same parallelism for all the layers of the BNN, which degrades throughput performance at certain layers. To overcome this problem, we propose a BNN accelerator with adaptive parallelism that offers high throughput performance in all layers. The proposed accelerator analyzes target layer parameters and operates with optimal parallelism using reasonable resources. In addition, this architecture is able to fully compute all types of BNN layers thanks to its reconfigurability, and it can achieve a higher area–speed efficiency than existing accelerators. In performance evaluation using state-of-the-art BNN topologies, the designed BNN accelerator achieved an area–speed efficiency 9.69 times higher than previous FPGA implementations and 24% higher than existing VLSI implementations for BNNs.
3

Sunny, Febin P., Asif Mirza, Mahdi Nikdast, and Sudeep Pasricha. "ROBIN: A Robust Optical Binary Neural Network Accelerator". ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–24. http://dx.doi.org/10.1145/3476988.

Abstract:
Domain specific neural network accelerators have garnered attention because of their improved energy efficiency and inference performance compared to CPUs and GPUs. Such accelerators are thus well suited for resource-constrained embedded systems. However, mapping sophisticated neural network models on these accelerators still entails significant energy and memory consumption, along with high inference time overhead. Binarized neural networks (BNNs), which utilize single-bit weights, represent an efficient way to implement and deploy neural network models on accelerators. In this paper, we present a novel optical-domain BNN accelerator, named ROBIN , which intelligently integrates heterogeneous microring resonator optical devices with complementary capabilities to efficiently implement the key functionalities in BNNs. We perform detailed fabrication-process variation analyses at the optical device level, explore efficient corrective tuning for these devices, and integrate circuit-level optimization to counter thermal variations. As a result, our proposed ROBIN architecture possesses the desirable traits of being robust, energy-efficient, low latency, and high throughput, when executing BNN models. Our analysis shows that ROBIN can outperform the best-known optical BNN accelerators and many electronic accelerators. Specifically, our energy-efficient ROBIN design exhibits energy-per-bit values that are ∼4 × lower than electronic BNN accelerators and ∼933 × lower than a recently proposed photonic BNN accelerator, while a performance-efficient ROBIN design shows ∼3 × and ∼25 × better performance than electronic and photonic BNN accelerators, respectively.
4

Simons, Taylor, and Dah-Jye Lee. "A Review of Binarized Neural Networks". Electronics 8, no. 6 (June 12, 2019): 661. http://dx.doi.org/10.3390/electronics8060661.

Abstract:
In this work, we review Binarized Neural Networks (BNNs). BNNs are deep neural networks that use binary values for activations and weights, instead of full precision values. With binary values, BNNs can execute computations using bitwise operations, which reduces execution time. Model sizes of BNNs are much smaller than their full precision counterparts. While the accuracy of a BNN model is generally less than full precision models, BNNs have been closing accuracy gap and are becoming more accurate on larger datasets like ImageNet. BNNs are also good candidates for deep learning implementations on FPGAs and ASICs due to their bitwise efficiency. We give a tutorial of the general BNN methodology and review various contributions, implementations and applications of BNNs.
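As a concrete illustration of the bitwise arithmetic mentioned above, here is a short NumPy sketch (not taken from the reviewed paper) of how a dot product between {-1, +1} vectors reduces to an XNOR followed by a popcount.

```python
import numpy as np

def binarize(x: np.ndarray) -> np.ndarray:
    # Sign binarization with the common convention sign(0) = +1.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a: np.ndarray, w: np.ndarray) -> int:
    # For vectors in {-1, +1}: dot(a, w) = 2 * popcount(XNOR(a, w)) - n,
    # where the popcount counts the positions on which a and w agree.
    a_bits, w_bits = a > 0, w > 0
    agreements = np.logical_not(np.logical_xor(a_bits, w_bits))
    return 2 * int(agreements.sum()) - a.size

rng = np.random.default_rng(0)
a, w = binarize(rng.standard_normal(64)), binarize(rng.standard_normal(64))
assert xnor_popcount_dot(a, w) == int(a.astype(int) @ w.astype(int))
```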
5

Wang, Peisong, Xiangyu He, Gang Li, Tianli Zhao, and Jian Cheng. "Sparsity-Inducing Binarized Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12192–99. http://dx.doi.org/10.1609/aaai.v34i07.6900.

Abstract:
Binarization of feature representation is critical for Binarized Neural Networks (BNNs). Currently, sign function is the commonly used method for feature binarization. Although it works well on small datasets, the performance on ImageNet remains unsatisfied. Previous methods mainly focus on minimizing quantization error, improving the training strategies and decomposing each convolution layer into several binary convolution modules. However, whether sign is the only option for binarization has been largely overlooked. In this work, we propose the Sparsity-inducing Binarized Neural Network (Si-BNN), to quantize the activations to be either 0 or +1, which introduces sparsity into binary representation. We further introduce trainable thresholds into the backward function of binarization to guide the gradient propagation. Our method dramatically outperforms current state-of-the-arts, lowering the performance gap between full-precision networks and BNNs on mainstream architectures, achieving the new state-of-the-art on binarized AlexNet (Top-1 50.5%), ResNet-18 (Top-1 59.7%), and VGG-Net (Top-1 63.2%). At inference time, Si-BNN still enjoys the high efficiency of exclusive-not-or (xnor) operations.
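To make the {0, +1} quantization concrete, a tiny NumPy sketch follows; note that in Si-BNN the threshold is trainable and the backward pass is customized, whereas the fixed threshold here is purely illustrative.

```python
import numpy as np

def binarize_activations(x: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    # Quantize activations to {0, +1}; values at or below the threshold are
    # zeroed out, which introduces sparsity into the binary representation.
    return (x > threshold).astype(np.int8)

x = np.random.default_rng(1).standard_normal(8).astype(np.float32)
a = binarize_activations(x, threshold=0.5)
print(a, f"sparsity = {1.0 - a.mean():.2f}")
```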
6

Liu, Chunlei, Peng Chen, Bohan Zhuang, Chunhua Shen, Baochang Zhang, and Wenrui Ding. "SA-BNN: State-Aware Binary Neural Network". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2091–99. http://dx.doi.org/10.1609/aaai.v35i3.16306.

Abstract:
Binary Neural Networks (BNNs) have received significant attention due to the memory and computation efficiency recently. However, the considerable accuracy gap between BNNs and their full-precision counterparts hinders BNNs to be deployed to resource-constrained platforms. One of the main reasons for the performance gap can be attributed to the frequent weight flip, which is caused by the misleading weight update in BNNs. To address this issue, we propose a state-aware binary neural network (SA-BNN) equipped with the well designed state-aware gradient. Our SA-BNN is inspired by the observation that the frequent weight flip is more likely to occur, when the gradient magnitude for all quantization states {-1,1} is identical. Accordingly, we propose to employ independent gradient coefficients for different states when updating the weights. Furthermore, we also analyze the effectiveness of the state-aware gradient on suppressing the frequent weight flip problem. Experiments on ImageNet show that the proposed SA-BNN outperforms the current state-of-the-arts (e.g., Bi-Real Net) by more than 3% when using a ResNet architecture. Specifically, we achieve 61.7%, 65.5% and 68.7% Top-1 accuracy with ResNet-18, ResNet-34 and ResNet-50 on ImageNet, respectively.
7

Zhao, Yiyang, Yongjia Wang, Ruibo Wang, Yuan Rong, and Xianyang Jiang. "A Highly Robust Binary Neural Network Inference Accelerator Based on Binary Memristors". Electronics 10, no. 21 (October 25, 2021): 2600. http://dx.doi.org/10.3390/electronics10212600.

Abstract:
Since memristor was found, it has shown great application potential in neuromorphic computing. Currently, most neural networks based on memristors deploy the special analog characteristics of memristor. However, owing to the limitation of manufacturing process, non-ideal characteristics such as non-linearity, asymmetry, and inconsistent device periodicity appear frequently and definitely, therefore, it is a challenge to employ memristor in a massive way. On the contrary, a binary neural network (BNN) requires its weights to be either +1 or −1, which can be mapped by digital memristors with high technical maturity. Upon this, a highly robust BNN inference accelerator with binary sigmoid activation function is proposed. In the accelerator, the inputs of each network layer are either +1 or 0, which can facilitate feature encoding and reduce the peripheral circuit complexity of memristor hardware. The proposed two-column reference memristor structure together with current controlled voltage source (CCVS) circuit not only solves the problem of mapping positive and negative weights on memristor array, but also eliminates the sneak current effect under the minimum conductance status. Being compared to the traditional differential pair structure of BNN, the proposed two-column reference scheme can reduce both the number of memristors and the latency to refresh the memristor array by nearly 50%. The influence of non-ideal factors of memristor array such as memristor array yield, memristor conductance fluctuation, and reading noise on the accuracy of BNN is investigated in detail based on a newly memristor circuit model with non-ideal characteristics. The experimental results demonstrate that when the array yield α ≥ 5%, or the reading noise σ ≤ 0.25, a recognition accuracy greater than 97% on the MNIST data set is achieved.
8

Xiang, Maoyang, and Tee Hui Teo. "Implementation of Binarized Neural Networks in All-Programmable System-on-Chip Platforms". Electronics 11, no. 4 (February 21, 2022): 663. http://dx.doi.org/10.3390/electronics11040663.

Abstract:
The Binarized Neural Network (BNN) is a Convolutional Neural Network (CNN) consisting of binary weights and activation rather than real-value weights. Smaller models are used, allowing for inference effectively on mobile or embedded devices with limited power and computing capabilities. Nevertheless, binarization results in lower-entropy feature maps and gradient vanishing, which leads to a loss in accuracy compared to real-value networks. Previous research has addressed these issues with various approaches. However, those approaches significantly increase the algorithm’s time and space complexity, which puts a heavy burden on those embedded devices. Therefore, a novel approach for BNN implementation on embedded systems with multi-scale BNN topology is proposed in this paper, from two optimization perspectives: hardware structure and BNN topology, that retains more low-level features throughout the feed-forward process with few operations. Experiments on the CIFAR-10 dataset indicate that the proposed method outperforms a number of current BNN designs in terms of efficiency and accuracy. Additionally, the proposed BNN was implemented on the All Programmable System on Chip (APSoC) with 4.4 W power consumption using the hardware accelerator.
9

Zhang, Longlong, Xuebin Tang, Xiang Hu, Tong Zhou, and Yuanxi Peng. "FPGA-Based BNN Architecture in Time Domain with Low Storage and Power Consumption". Electronics 11, no. 9 (April 28, 2022): 1421. http://dx.doi.org/10.3390/electronics11091421.

Abstract:
With the increasing demand for convolutional neural networks (CNNs) in many edge computing scenarios and resource-limited settings, researchers have made efforts to apply lightweight neural networks on hardware platforms. While binarized neural networks (BNNs) perform excellently in such tasks, many implementations still face challenges such as an imbalance between accuracy and computational complexity, as well as the requirement for low power and storage consumption. This paper first proposes a novel binary convolution structure based on the time domain to reduce resource and power consumption for the convolution process. Furthermore, through the joint design of binary convolution, batch normalization, and activation function in the time domain, we propose a full-BNN model and hardware architecture (Model I), which keeps the values of all intermediate results as binary (1 bit) to reduce storage requirements by 75%. At the same time, we propose a mixed-precision BNN structure (model II) based on the sensitivity of different layers of the network to the calculation accuracy; that is, the layer sensitive to the classification result uses fixed-point data, and the other layers use binary data in the time domain. This can achieve a balance between accuracy and computing resources. Lastly, we take the MNIST dataset as an example to test the above two models on the field-programmable gate array (FPGA) platform. The results show that the two models can be used as neural network acceleration units with low storage requirements and low power consumption for classification tasks under the condition that the accuracy decline is small. The joint design method in the time domain may further inspire other computing architectures. In addition, the design of Model II has certain reference significance for the design of more complex classification tasks.
10

Kim, HyunJin, Mohammed Alnemari, and Nader Bagherzadeh. "A storage-efficient ensemble classification using filter sharing on binarized convolutional neural networks". PeerJ Computer Science 8 (March 29, 2022): e924. http://dx.doi.org/10.7717/peerj-cs.924.

Abstract:
This paper proposes a storage-efficient ensemble classification to overcome the low inference accuracy of binary neural networks (BNNs). When external power is enough in a dynamic powered system, classification results can be enhanced by aggregating outputs of multiple BNN classifiers. However, memory requirements for storing multiple classifiers are a significant burden in the lightweight system. The proposed scheme shares the filters from a trained convolutional neural network (CNN) model to reduce storage requirements in the binarized CNNs instead of adopting the fully independent classifier. While several filters are shared, the proposed method only trains unfrozen learnable parameters in the retraining step. We compare and analyze the performances of the proposed ensemble-based systems depending on various ensemble types and BNN structures on CIFAR datasets. Our experiments conclude that the proposed method using the filter sharing can be scalable with the number of classifiers and effective in enhancing classification accuracy. With binarized ResNet-20 and ReActNet-10 on the CIFAR-100 dataset, the proposed scheme can achieve 56.74% and 70.29% Top-1 accuracies with 10 BNN classifiers, which enhances performance by 7.6% and 3.6% compared with that using a single BNN classifier.

Doctoral dissertations on the topic "Binary neural networks (BNN)"

1

Simons, Taylor Scott. "High-Speed Image Classification for Resource-Limited Systems Using Binary Values". BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/9097.

Abstract:
Image classification is a memory- and compute-intensive task. It is difficult to implement high-speed image classification algorithms on resource-limited systems like FPGAs and embedded computers. Most image classification algorithms require many fixed- and/or floating-point operations and values. In this work, we explore the use of binary values to reduce the memory and compute requirements of image classification algorithms. Our objective was to implement these algorithms on resource-limited systems while maintaining comparable accuracy and high speeds. By implementing high-speed image classification algorithms on resource-limited systems like embedded computers, FPGAs, and ASICs, automated visual inspection can be performed on small low-powered systems. Industries like manufacturing, medicine, and agriculture can benefit from compact, high-speed, low-power visual inspection systems. Tasks like defect detection in manufactured products and quality sorting of harvested produce can be performed cheaper and more quickly. In this work, we present ECO Jet Features, an algorithm adapted to use binary values for visual inspection. The ECO Jet Features algorithm ran 3.7x faster than the original ECO Features algorithm on embedded computers. It also allowed the algorithm to be implemented on an FPGA, achieving 78x speedup over full-sized desktop systems, using a fraction of the power and space. We reviewed Binarized Neural Nets (BNNs), neural networks that use binary values for weights and activations. These networks are particularly well suited for FPGA implementation and we compared and contrasted various FPGA implementations found throughout the literature. Finally, we combined the deep learning methods used in BNNs with the efficiency of Jet Features to make Neural Jet Features. Neural Jet Features are binarized convolutional layers that are learned through deep learning and learn classic computer vision kernels like the Gaussian and Sobel kernels. These kernels are efficiently computed as a group and their outputs can be reused when forming output channels. They performed just as well as BNN convolutions on visual inspection tasks and are more stable when trained on small models.
2

Braga, Antônio de Pádua. "Design models for recursive binary neural networks". Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336442.

3

Redkar, Shrutika. "Deep Learning Binary Neural Network on an FPGA". Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/407.

Abstract:
In recent years, deep neural networks have attracted lots of attentions in the field of computer vision and artificial intelligence. Convolutional neural network exploits spatial correlations in an input image by performing convolution operations in local receptive fields. When compared with fully connected neural networks, convolutional neural networks have fewer weights and are faster to train. Many research works have been conducted to further reduce computational complexity and memory requirements of convolutional neural networks, to make it applicable to low-power embedded applications. This thesis focuses on a special class of convolutional neural network with only binary weights and activations, referred as binary neural networks. Weights and activations for convolutional and fully connected layers are binarized to take only two values, +1 and -1. Therefore, the computations and memory requirement have been reduced significantly. The proposed architecture of binary neural networks has been implemented on an FPGA as a real time, high speed, low power computer vision platform. Only on-chip memories are utilized in the FPGA design. The FPGA implementation is evaluated using the CIFAR-10 benchmark and achieved a processing speed of 332,164 images per second for CIFAR-10 dataset with classification accuracy of about 86.06%.
4

Ezzadeen, Mona. "Conception d'un circuit dédié au calcul dans la mémoire à base de technologie 3D innovante". Electronic Thesis or Diss., Aix-Marseille, 2022. http://theses.univ-amu.fr.lama.univ-amu.fr/221212_EZZADEEN_955e754k888gvxorp699jljcho_TH.pdf.

Abstract:
With the advent of edge devices and artificial intelligence, the data deluge is a reality, making energy-efficient computing systems a must-have. Unfortunately, classical von Neumann architectures suffer from the high cost of data transfers between memories and processing units. At the same time, CMOS scaling seems more and more challenging and costly to afford, limiting the chips' performance due to power consumption issues. In this context, bringing the computation directly inside or near memories (I/NMC) seems an appealing solution. However, data-centric applications require an important amount of non-volatile storage, and modern Flash memories suffer from scaling issues and are not very suited for I/NMC. On the other hand, emerging memory technologies such as ReRAM present very appealing memory performances, good scalability, and interesting I/NMC features. However, they suffer from variability issues and from a degraded density integration if an access transistor per bitcell (1T1R) is used to limit the sneak-path currents. This thesis work aims to overcome these two challenges. First, the variability impact on read and I/NMC operations is assessed and new robust and low-overhead ReRAM-based boolean operations are proposed. In the context of neural networks, new ReRAM-based neuromorphic accelerators are developed and characterized, with an emphasis on good robustness against variability, good parallelism, and high energy efficiency. Second, to resolve the density integration issues, an ultra-dense 3D 1T1R ReRAM-based Cube and its architecture are proposed, which can be used as a 3D NOR memory as well as a low overhead and energy-efficient I/NMC accelerator.
5

Kennedy, John V. "The design of a scalable and application independent platform for binary neural networks". Thesis, University of York, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323503.

6

Li, Guo. "Neural network for optimization of binary computer-generated hologram with printing model". Online version of thesis, 1995. http://hdl.handle.net/1850/12234.

7

Medvedieva, S. O., I. V. Bogach, V. A. Kovenko, С. О. Медведєва, І. В. Богач, and В. А. Ковенко. "Neural networks in Machine learning". Thesis, ВНТУ, 2019. http://ir.lib.vntu.edu.ua//handle/123456789/24788.

Abstract:
The paper covers the basic principles of Neural Networks’ work. Special attention is paid to Frank Rosenblatt’s model of the network called “perceptron”. In addition, the article touches upon the main programming languages used to write software for Neural Networks.
8

Wilson, Brittany Michelle. "Evaluating and Improving the SEU Reliability of Artificial Neural Networks Implemented in SRAM-Based FPGAs with TMR". BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8619.

Abstract:
Artificial neural networks (ANNs) are used in many types of computing applications. Traditionally, ANNs have been implemented in software, executing on CPUs and even GPUs, which capitalize on the parallelizable nature of ANNs. More recently, FPGAs have become a target platform for ANN implementations due to their relatively low cost, low power, and flexibility. Some safety-critical applications could benefit from ANNs, but these applications require a certain level of reliability. SRAM-based FPGAs are sensitive to single-event upsets (SEUs), which can lead to faults and errors in execution. However, there are techniques that can mask such SEUs and thereby improve the overall design reliability. This thesis evaluates the SEU reliability of neural networks implemented in SRAM-based FPGAs and investigates mitigation techniques against upsets for two case studies. The first was based on the LeNet-5 convolutional neural network and was used to test an implementation with both fault injection and neutron radiation experiments, demonstrating that our fault injection experiments could accurately evaluate SEU reliability of the networks. SEU reliability was improved by selectively applying TMR to the most critical layers of the design, achieving a 35% improvement in reliability at a 6.6% increase in resources. The second was an existing neural network called BNN-PYNQ. While the base design was more sensitive to upsets than the CNN previously tested, the TMR technique improved the reliability by approximately 7× in fault injection experiments.
9

Mealey, Thomas C. "Binary Recurrent Unit: Using FPGA Hardware to Accelerate Inference in Long Short-Term Memory Neural Networks". University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1524402925375566.

10

Strandberg, Rickard, and Johan Låås. "A comparison between Neural networks, Lasso regularized Logistic regression, and Gradient boosted trees in modeling binary sales". Thesis, KTH, Optimeringslära och systemteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252556.

Abstract:
The primary purpose of this thesis is to predict whether or not a customer will make a purchase from a specific item category. The historical data is provided by the Nordic online-based IT-retailer Dustin. The secondary purpose is to evaluate how well a fully connected feed forward neural network performs as compared to Lasso regularized logistic regression and gradient boosted trees (XGBoost) on this task. This thesis finds XGBoost to be superior to the two other methods in terms of prediction accuracy, as well as speed.

Books on the topic "Binary neural networks (BNN)"

1

Baram, Yoram. Orthogonal patterns in binary neural networks. Moffett Field, Calif: National Aeronautics and Space Administration, Ames Research Center, 1988.

2

Aizenberg, Igor N. Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Boston, MA: Springer US, 2000.

3

Aizenberg, Igor, Naum N. Aizenberg, and Joos P. L. Vandewalle. Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Springer, 2000.

4

Li, Jian, Antonio De Maio, Guolong Cui, and Alfonso Farina. Radar Waveform Design Based on Optimization Theory. Institution of Engineering & Technology, 2020.

5

Radar Waveform Design Based on Optimization Theory. Institution of Engineering & Technology, 2020.


Book chapters on the topic "Binary neural networks (BNN)"

1

Zhang, Yedi, Zhe Zhao, Guangke Chen, Fu Song, and Taolue Chen. "BDD4BNN: A BDD-Based Quantitative Analysis Framework for Binarized Neural Networks". In Computer Aided Verification, 175–200. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_8.

Abstract:
Verifying and explaining the behavior of neural networks is becoming increasingly important, especially when they are deployed in safety-critical applications. In this paper, we study verification and interpretability problems for Binarized Neural Networks (BNNs), the 1-bit quantization of general real-numbered neural networks. Our approach is to encode BNNs into Binary Decision Diagrams (BDDs), which is done by exploiting the internal structure of the BNNs. In particular, we translate the input-output relation of blocks in BNNs to cardinality constraints which are in turn encoded by BDDs. Based on the encoding, we develop a quantitative framework for BNNs where precise and comprehensive analysis of BNNs can be performed. We demonstrate the application of our framework by providing quantitative robustness analysis and interpretability for BNNs. We implement a prototype tool and carry out extensive experiments, confirming the effectiveness and efficiency of our approach.
2

Myojin, Tomoyuki, Shintaro Hashimoto, and Naoki Ishihama. "Detecting Uncertain BNN Outputs on FPGA Using Monte Carlo Dropout Sampling". In Artificial Neural Networks and Machine Learning – ICANN 2020, 27–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61616-8_3.

3

Hodge, Victoria J., and Jim Austin. "A Novel Binary Spell Checker". In Artificial Neural Networks — ICANN 2001, 1199–204. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_167.

4

Karandashev, Yakov, and Boris Kryzhanovsky. "Binary Minimization: Increasing the Attraction Area of the Global Minimum in the Binary Optimization Problem". In Artificial Neural Networks – ICANN 2010, 525–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15822-3_64.

5

Skubiszewski, Marcin. "A Hardware Emulator for Binary Neural Networks". In International Neural Network Conference, 555–58. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_2.

6

Makita, Kazuma, Takahiro Ozawa, and Toshimichi Saito. "Basic Analysis of Cellular Dynamic Binary Neural Networks". In Neural Information Processing, 779–86. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_79.

7

Anzai, Shota, Seitaro Koyama, Shunsuke Aoki, and Toshimichi Saito. "Sparse Dynamic Binary Neural Networks for Storage and Switching of Binary Periodic Orbits". In Neural Information Processing, 536–42. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36711-4_45.

8

Cotofana, Sorin, and Stamatis Vassiliadis. "Serial binary addition with polynomially bounded weights". In Artificial Neural Networks — ICANN 96, 741–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61510-5_125.

9

Kanerva, Pentti. "Binary spatter-coding of ordered K-tuples". In Artificial Neural Networks — ICANN 96, 869–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61510-5_146.

10

Tu, Zhijun, Xinghao Chen, Pengju Ren, and Yunhe Wang. "AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets". In Lecture Notes in Computer Science, 379–95. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20083-0_23.


Conference papers on the topic "Binary neural networks (BNN)"

1

Zhao, Junhe, Linlin Yang, Baochang Zhang, Guodong Guo, and David Doermann. "Uncertainty-aware Binary Neural Networks". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/474.

Abstract:
Binary Neural Networks (BNN) are promising machine learning solutions for deployment on resource-limited devices. Recent approaches to training BNNs have produced impressive results, but minimizing the drop in accuracy from full precision networks is still challenging. One reason is that conventional BNNs ignore the uncertainty caused by weights that are near zero, resulting in the instability or frequent flip while learning. In this work, we investigate the intrinsic uncertainty of vanishing near-zero weights, making the training vulnerable to instability. We introduce an uncertainty-aware BNN (UaBNN) by leveraging a new mapping function called certainty-sign (c-sign) to reduce these weights' uncertainties. Our c-sign function is the first to train BNNs with a decreasing uncertainty for binarization. The approach leads to a controlled learning process for BNNs. We also introduce a simple but effective method to measure the uncertainty-based on a Gaussian function. Extensive experiments demonstrate that our method improves multiple BNN methods by maintaining stability of training, and achieves a higher performance over prior arts.
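A rough NumPy sketch of the underlying intuition follows, assuming (as a simplification, not the paper's exact c-sign definition) that uncertainty is measured with a zero-centered Gaussian, so that near-zero weights are the most uncertain and the most likely to flip.

```python
import numpy as np

def weight_uncertainty(w: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    # Zero-centered Gaussian measure: weights close to zero get an uncertainty
    # near 1 and are the ones most prone to sign flips during training.
    return np.exp(-(w ** 2) / (2.0 * sigma ** 2))

w = np.array([-0.30, -0.02, 0.001, 0.25], dtype=np.float32)
print(weight_uncertainty(w))  # small magnitudes map to values near 1.0
```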
2

Chen, Tianlong, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, and Zhangyang Wang. "“BNN - BN = ?”: Training Binary Neural Networks without Batch Normalization". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00520.

3

Li, Yixing, and Fengbo Ren. "BNN Pruning: Pruning Binary Neural Network Guided by Weight Flipping Frequency". In 2020 21st International Symposium on Quality Electronic Design (ISQED). IEEE, 2020. http://dx.doi.org/10.1109/isqed48828.2020.9136977.

4

Suarez-Ramirez, Cuauhtemoc, Miguel Gonzalez-Mendoza, Leonardo Chang, Gilberto Ochoa-Ruiz, and Mario Duran-Vega. "A Bop and Beyond: A Second Order Optimizer for Binarized Neural Networks". In LatinX in AI at Computer Vision and Pattern Recognition Conference 2021. Journal of LatinX in AI Research, 2021. http://dx.doi.org/10.52591/lxai202106255.

Abstract:
The optimization of Binary Neural Networks (BNNs) relies on approximating the real-valued weights with their binarized representations. Current techniques for weight-updating use the same approaches as traditional Neural Networks (NNs) with the extra requirement of using an approximation to the derivative of the sign function - as it is the Dirac-Delta function - for back-propagation; thus, efforts are focused adapting full-precision techniques to work on BNNs. In the literature, only one previous effort has tackled the problem of directly training the BNNs with bit-flips by using the first raw moment estimate of the gradients and comparing it against a threshold for deciding when to flip a weight (Bop). In this paper, we take an approach parallel to Adam which also uses the second raw moment estimate to normalize the first raw moment before doing the comparison with the threshold, we call this method Bop2ndOrder. We present two versions of the proposed optimizer: a biased one and a bias-corrected one, each with its own applications. Also, we present a complete ablation study of the hyperparameters space, as well as the effect of using schedulers on each of them. For these studies, we tested the optimizer in CIFAR10 using the BinaryNet architecture. Also, we tested it in ImageNet 2012 with the XnorNet and BiRealNet architectures for accuracy. In both datasets our approach proved to converge faster, was robust to changes of the hyperparameters, and achieved better accuracy values.
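The flipping rule described above can be sketched as follows; this is an interpretation of the abstract, not the authors' reference implementation, and the hyperparameter names and values are illustrative only.

```python
import numpy as np

def bop2ndorder_step(w_bin, grad, m, v, tau=1e-4, gamma=1e-2, beta=1e-3, eps=1e-8):
    # Keep first (m) and second (v) raw moment estimates of the gradient,
    # normalize m by sqrt(v), and flip a binary weight when the normalized
    # momentum exceeds the threshold tau while agreeing in sign with the weight.
    m = (1 - gamma) * m + gamma * grad
    v = (1 - beta) * v + beta * grad ** 2
    m_hat = m / (np.sqrt(v) + eps)
    flip = (np.abs(m_hat) > tau) & (np.sign(m_hat) == np.sign(w_bin))
    return np.where(flip, -w_bin, w_bin), m, v

w = np.array([1.0, -1.0, 1.0])
m, v = np.zeros_like(w), np.zeros_like(w)
w, m, v = bop2ndorder_step(w, np.array([0.2, -0.3, -0.1]), m, v)
print(w)  # weights whose normalized momentum agrees in sign get flipped
```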
5

Say, Buser, and Scott Sanner. "Planning in Factored State and Action Spaces with Learned Binarized Neural Network Transition Models". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/669.

Abstract:
In this paper, we leverage the efficiency of Binarized Neural Networks (BNNs) to learn complex state transition models of planning domains with discretized factored state and action spaces. In order to directly exploit this transition structure for planning, we present two novel compilations of the learned factored planning problem with BNNs based on reductions to Boolean Satisfiability (FD-SAT-Plan) as well as Binary Linear Programming (FD-BLP-Plan). Experimentally, we show the effectiveness of learning complex transition models with BNNs, and test the runtime efficiency of both encodings on the learned factored planning problem. After this initial investigation, we present an incremental constraint generation algorithm based on generalized landmark constraints to improve the planning accuracy of our encodings. Finally, we show how to extend the best performing encoding (FD-BLP-Plan+) beyond goals to handle factored planning problems with rewards.
6

Cardelli, Luca, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, and Matthew Wicker. "Statistical Guarantees for the Robustness of Bayesian Neural Networks". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/789.

Abstract:
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two. Such a measure can be used, for instance, to quantify the probability of the existence of adversarial examples. Building on statistical verification techniques for probabilistic models, we develop a framework that allows us to estimate probabilistic robustness for a BNN with statistical guarantees, i.e., with a priori error and confidence bounds. We provide experimental comparison for several approximate BNN inference techniques on image classification tasks associated to MNIST and a two-class subset of the GTSRB dataset. Our results enable quantification of uncertainty of BNN predictions in adversarial settings.
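For intuition, here is a minimal Monte Carlo sketch of estimating such a robustness probability by sampling perturbations inside an L-infinity ball; `predict` is a hypothetical classifier callable, and unlike the paper this naive estimate comes without a priori error or confidence bounds.

```python
import numpy as np

def estimate_robustness(predict, x, radius=0.05, n_samples=1000, seed=0):
    # Fraction of uniformly sampled points in the L-infinity ball around x
    # whose predicted label matches the prediction at x itself.
    rng = np.random.default_rng(seed)
    base_label = predict(x)
    agree = 0
    for _ in range(n_samples):
        delta = rng.uniform(-radius, radius, size=x.shape)
        agree += int(predict(np.clip(x + delta, 0.0, 1.0)) == base_label)
    return agree / n_samples

# Toy usage with a stand-in linear "classifier".
toy_predict = lambda x: int(x.sum() > 0.0)
print(estimate_robustness(toy_predict, np.full(16, 0.01)))
```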
7

Liu, Haihui, Yuang Zhang, Xiangwei Zheng, Mengxin Yu, Baoxu An, and Xiaojie Liu. "PC-BNA: Parallel Convolution Binary Neural Network Accelerator". In 2022 IEEE 24th Int Conf on High Performance Computing & Communications; 8th Int Conf on Data Science & Systems; 20th Int Conf on Smart City; 8th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys). IEEE, 2022. http://dx.doi.org/10.1109/hpcc-dss-smartcity-dependsys57074.2022.00169.

8

Magnitskii, N. A., and A. A. Mikhailov. "Binary neural networks". In Optical Information Science and Technology, edited by Andrei L. Mikaelian. SPIE, 1998. http://dx.doi.org/10.1117/12.304957.

9

Heilala, Janne, Paavo Nevalainen, and Kristiina Toivonen. "AI-based sentiment analysis approaches for large-scale data domains of public and security interests". In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003738.

Abstract:
Organizational service learn-leadership design for adapting and predicting machine learning-based sentiments of sociotechnical systems is being addressed in segmenting textual-producing agents in classes. In the past, there have been numerous demonstrations in different language models (LMs) and (naïve) Bayesian Networks (BN) that can classify textual knowledge origin for different classes based on decisive binary trees from the future prediction aspect of how public text collection and processing can be approached, converging the root causes of events. An example is how communication influence and affect the end-user. Within service providers and industry, the progress of processing communication relies on formal clinical and informal non-practices. The LM is based on handcrafted division on machine learning (ML) approaches representing the subset of AI and can be used as an orthogonal policy-as-a-target leadership tool in customer or political discussions. The classifiers which use the numeric representation of textual information are classified in a Neural Network (NN) by characterizing, for instance, the communication using cross-sectional analysis methods. The textual form of reality collected in the databases has significant processable value-adding opportunities in different management and leadership, education, and climate control sectors. The data can be used cautiously for establishing and maintaining new and current business operations and innovations. There is currently a lack of understanding of how to use most NN and DN methods. The operations and innovations management and leadership support the flow of communication for effectiveness and quality.
10

Huang, Kuan-Yu, Jettae Schroff, Cheng-Di Tsai, and Tsung-Chu Huang. "2DAN-BNN: Two-Dimensional AN-Code Decoders for Binarized Neural Networks". In 2022 IEEE International Conference on Consumer Electronics - Taiwan. IEEE, 2022. http://dx.doi.org/10.1109/icce-taiwan55306.2022.9868989.


Reports on the topic "Binary neural networks (BNN)"

1

Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.

Abstract:
We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter dependent unitary transformations which acts on an input quantum state. For binary classification a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network’s predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state. By example we show that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. Therefore it will be possible to run this QNN on a near term gate model quantum computer where its power can be explored beyond what can be explored with simulation.
2

Denysenko, Oleksii. Solving binary classification problem by means of convolutional neural networks with the use of TensorFlow framework. Intellectual Archive, March 2019. http://dx.doi.org/10.32370/online/2019_03_25_1.
