A selection of scholarly literature on the topic "Fixed neural network"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Fixed neural network".
Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the source metadata.
Journal articles on the topic "Fixed neural network"
Wang, Guanzheng, Rubin Wang, Wanzeng Kong, and Jianhai Zhang. "The Relationship between Sparseness and Energy Consumption of Neural Networks." Neural Plasticity 2020 (November 25, 2020): 1–13. http://dx.doi.org/10.1155/2020/8848901.
Thatoi, Dhirendranath, and Prabir Kumar Jena. "Inverse Analysis of Crack in Fixed-Fixed Structure by Neural Network with the Aid of Modal Analysis." Advances in Artificial Neural Systems 2013 (March 3, 2013): 1–8. http://dx.doi.org/10.1155/2013/150209.
Zhang, Jianhua, Yang Li, and Wenbo Fei. "Neural Network-Based Nonlinear Fixed-Time Adaptive Practical Tracking Control for Quadrotor Unmanned Aerial Vehicles." Complexity 2020 (September 26, 2020): 1–13. http://dx.doi.org/10.1155/2020/8828453.
Pan, Tetie, Bao Shi, and Jian Yuan. "Global Stability of Almost Periodic Solution of a Class of Neutral-Type BAM Neural Networks." Abstract and Applied Analysis 2012 (2012): 1–18. http://dx.doi.org/10.1155/2012/482584.
Cotter, Neil E., and Peter R. Conwell. "Universal Approximation by Phase Series and Fixed-Weight Networks." Neural Computation 5, no. 3 (May 1993): 359–62. http://dx.doi.org/10.1162/neco.1993.5.3.359.
Scellier, Benjamin, and Yoshua Bengio. "Equivalence of Equilibrium Propagation and Recurrent Backpropagation." Neural Computation 31, no. 2 (February 2019): 312–29. http://dx.doi.org/10.1162/neco_a_01160.
Guo, Chengjun, Donal O’Regan, Feiqi Deng, and Ravi P. Agarwal. "Fixed points and exponential stability for a stochastic neutral cellular neural network." Applied Mathematics Letters 26, no. 8 (August 2013): 849–53. http://dx.doi.org/10.1016/j.aml.2013.03.011.
White, Robert L., and Lawrence H. Snyder. "A Neural Network Model of Flexible Spatial Updating." Journal of Neurophysiology 91, no. 4 (April 2004): 1608–19. http://dx.doi.org/10.1152/jn.00277.2003.
Li, Yang, Jianhua Zhang, Xinli Xu, and Cheng Siong Chin. "Adaptive Fixed-Time Neural Network Tracking Control of Nonlinear Interconnected Systems." Entropy 23, no. 9 (September 1, 2021): 1152. http://dx.doi.org/10.3390/e23091152.
Sergeev, Fedor, Elena Bratkovskaya, Ivan Kisel, and Iouri Vassiliev. "Deep learning for quark–gluon plasma detection in the CBM experiment." International Journal of Modern Physics A 35, no. 33 (November 30, 2020): 2043002. http://dx.doi.org/10.1142/s0217751x20430022.
Повний текст джерелаДисертації з теми "Fixed neural network"
Puttige, Vishwas Ramadas. "Neural network based adaptive control for autonomous flight of fixed wing unmanned aerial vehicles." Thesis, University of New South Wales - Australian Defence Force Academy, School of Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/43736.
Hao, Haiyan. "Understanding Fixed Object Crashes with SHRP2 Naturalistic Driving Study Data." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/84942.
Rava, Eleonora Maria Elizabeth. "Bioaugmentation of coal gasification stripped gas liquor wastewater in a hybrid fixed-film bioreactor." Thesis (PhD), University of Pretoria, 2017. http://hdl.handle.net/2263/62789.
Kolomiiets, Olha Viktorivna. "A Telegram bot for classifying images of municipal solid waste" [in Ukrainian]. Master's thesis, Khmelnytskyi National University, 2020. http://elar.khnu.km.ua/jspui/handle/123456789/9374.
Silva, Carlos Alberto de Albuquerque. "Contribuição para o estudo do embarque de uma rede neural artificial em field programmable gate array (FPGA)." Universidade Federal do Rio Grande do Norte, 2010. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15340.
This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, i.e. in a programmable device such as a field programmable gate array (FPGA). The work explored different implementations, described in VHDL, of multilayer perceptron ANNs. Because of the parallelism inherent to ANNs, software implementations suffer from the sequential nature of Von Neumann architectures; a hardware implementation, by contrast, can exploit all the parallelism implicit in this model. FPGAs are increasingly used as a platform for implementing neural networks in hardware, thanks to their high processing power, low cost, ease of programming, and circuit reconfigurability, which lets the network adapt to different applications. In this context, the aim was to develop arrays of neural networks in hardware with a flexible architecture, in which neurons can be added or removed and, above all, the network topology can be modified, so as to enable a modular fixed-point network on an FPGA. Five VHDL descriptions were synthesized: two for neurons with one and two inputs, and three for different ANN architectures. The architecture descriptions are highly modular, making it easy to increase or decrease the number of neurons. As a result, several complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
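The fixed-point arithmetic at the heart of the abstract above can be sketched in a few lines of software. This is a minimal sketch under illustrative assumptions: the Q4.12 format and the saturating activation are choices made for the example, not details taken from the thesis.

```python
# Minimal sketch of a fixed-point neuron of the kind an FPGA implements.
# The Q4.12 format and saturating activation are illustrative assumptions.

FRAC_BITS = 12          # Q4.12: 4 integer bits, 12 fractional bits
SCALE = 1 << FRAC_BITS  # 4096

def to_fixed(x: float) -> int:
    """Quantize a real number to its nearest Q4.12 integer."""
    return int(round(x * SCALE))

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q4.12 values; shift the Q8.24 product back to Q4.12."""
    return (a * b) >> FRAC_BITS

def neuron(weights, inputs, bias):
    """Integer multiply-accumulate, as an FPGA DSP slice would compute it."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc += fixed_mul(w, x)
    # Hard saturation to [-1, 1] stands in for the activation function.
    return max(-SCALE, min(SCALE, acc))

w = [to_fixed(0.5), to_fixed(-0.25)]
x = [to_fixed(1.0), to_fixed(2.0)]
y = neuron(w, x, to_fixed(0.1))
print(y / SCALE)  # 0.10009765625, i.e. 0.5*1.0 - 0.25*2.0 + 0.1 up to rounding
```

Because every operation is an integer add, multiply, or shift, the same datapath maps directly onto FPGA logic; the small rounding error in the printed result is the price of the 12-bit fraction.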
Rek, Petr. "Knihovna pro návrh konvolučních neuronových sítí" [A library for designing convolutional neural networks]. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385999.
Čermák, Justin. "Implementace umělé neuronové sítě do obvodu FPGA" [Implementation of an artificial neural network in an FPGA circuit]. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219363.
Keller, Paul Edwin. "Fixed planar holographic interconnects for optically implemented neural networks." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185721.
Thai, Shee Meng. "Neural network modelling and control of coal fired boiler plant." Thesis, University of South Wales, 2005. https://pure.southwales.ac.uk/en/studentthesis/neural-network-modelling-and-control-of-coal-fired-boiler-plant(b5562ca0-e45e-44d8-aad2-ed2e3e114808).html.
Gaopande, Meghana Laxmidhar. "Exploring Accumulated Gradient-Based Quantization and Compression for Deep Neural Networks." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98617.
Neural networks are being employed in many different real-world applications. By learning the complex relationship between the input data and ground-truth output data during the training process, neural networks can predict outputs on new input data obtained in real time. To do so, a typical deep neural network often needs millions of numerical parameters, stored in memory. In this research, we explore techniques for reducing the storage requirements for neural network parameters. We propose software methods that convert 32-bit neural network parameters to values that can be stored using fewer bits. Our methods also convert a majority of numerical parameters to zero. Using special storage methods that only require storage of non-zero parameters, we gain significant compression benefits. On typical benchmarks like LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), our methods can achieve up to 57.22x, 50.19x, 13.15x and 13.53x compression respectively. Storage benefits are achieved at the cost of classification accuracy, and we present our work in the light of the accuracy-compression trade-off.
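The quantization and pruning described in the abstract above can be illustrated with a toy example. A sketch under stated assumptions: the 4-bit uniform grid and the 0.05 magnitude threshold are arbitrary choices for the illustration, not the methods proposed in the thesis.

```python
import random

# Toy illustration of weight quantization plus magnitude pruning.
# The 4-bit grid and 0.05 threshold are illustrative assumptions.

def quantize(w, n_bits=8):
    """Snap each weight to a uniform n-bit grid spanning the observed range."""
    levels = 2 ** n_bits - 1
    w_min, w_max = min(w), max(w)
    step = (w_max - w_min) / levels
    return [w_min + round((x - w_min) / step) * step for x in w]

def prune(w, threshold=0.05):
    """Zero out small-magnitude weights; a sparse format then stores only
    the non-zero survivors, which is where the compression factor comes from."""
    return [0.0 if abs(x) < threshold else x for x in w]

random.seed(0)
w = [random.gauss(0.0, 0.1) for _ in range(1000)]
wq = prune(quantize(w, n_bits=4))
density = sum(1 for x in wq if x != 0.0) / len(wq)
print(f"non-zero fraction after pruning: {density:.2f}")
```

With 4-bit codes for the surviving weights plus indices for their positions, storage drops well below the original 32 bits per weight, at the cost of quantization and pruning error, i.e. the accuracy-compression trade-off the abstract refers to.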
Books on the topic "Fixed neural network"
Chong, A. Z. S. The monitoring and control of stoker-fired boiler plant by neural networks. 1999.
Nolte, David D. Introduction to Modern Dynamics. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198844624.001.0001.
Повний текст джерелаЧастини книг з теми "Fixed neural network"
Koh, Bo-Yun, Yung-Wan Kwon, Hyuek-Jae Lee, Soo-Young Lee, and Sang-Yung Shin. "Optical Implementation of Neural Networks with Fixed Global Interconnection and Local Adaptive Gain-Control." In International Neural Network Conference, 611–14. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_22.
Litinsky, Leonid B. "Energy Functional and Fixed Points of a Neural Network." In Neural Nets WIRN VIETRI-97, 153–61. London: Springer London, 1998. http://dx.doi.org/10.1007/978-1-4471-1520-5_10.
Sulzbachner, Christoph, Martin Humenberger, Ágoston Srp, and Ferenc Vajda. "Optimization of a Neural Network for Computer Vision Based Fall Detection with Fixed-Point Arithmetic." In Neural Information Processing, 18–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34478-7_3.
Giacobbe, Mirco, Thomas A. Henzinger, and Mathias Lechner. "How Many Bits Does it Take to Quantize Your Neural Network?" In Tools and Algorithms for the Construction and Analysis of Systems, 79–97. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45237-7_5.
Giménez, V., M. Pérez-Castellanos, J. Rios Carrion, and F. de Mingo. "Capacity and parasitic fixed points control in a recursive neural network." In Biological and Artificial Computation: From Neuroscience to Technology, 217–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0032479.
Cha, ByungRae, KyungWoo Park, and JaeHyun Seo. "Neural Network Techniques for Host Anomaly Intrusion Detection Using Fixed Pattern Transformation." In Computational Science and Its Applications – ICCSA 2005, 254–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11424826_27.
Shan, Lei, Minxuan Zhang, Lin Deng, and Guohui Gong. "A Dynamic Multi-precision Fixed-Point Data Quantization Strategy for Convolutional Neural Network." In Communications in Computer and Information Science, 102–11. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3159-5_10.
Wang, Danyang, Chenghua Li, Benhua Zhang, and Ling Tong. "Prediction of Drying Indices for Paddy Rice in a Deep Fixed-Bed Based on Neural Network." In Computer and Computing Technologies in Agriculture X, 496–507. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-06155-5_51.
Deville, Yannick, and Shahram Hosseini. "Blind Operation of a Recurrent Neural Network for Linear-Quadratic Source Separation: Fixed Points, Stabilization and Adaptation Scheme." In Latent Variable Analysis and Signal Separation, 237–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15995-4_30.
Cervantes-Ojeda, Jorge, and María del Carmen Gómez-Fuentes. "On the Connection Weight Space Structure of a Two-Neuron Discrete Neural Network: Bifurcations of the Fixed Point at the Origin." In Nature-Inspired Computation and Machine Learning, 85–94. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13650-9_8.
Повний текст джерелаТези доповідей конференцій з теми "Fixed neural network"
Benmaghnia, Hanane, Matthieu Martel, and Yassamine Seladji. "Fixed-Point Code Synthesis for Neural Networks." In 6th International Conference on Artificial Intelligence, Soft Computing and Applications (AISCA 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120202.
Boo, Yoonho, and Wonyong Sung. "Fixed-Point Optimization of Transformer Neural Network." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054724.
Dias, Mauricio A., and Fernando S. Osorio. "Fixed-Point Neural Network Ensembles for Visual Navigation." In 2012 Brazilian Robotics Symposium and Latin American Robotics Symposium (SBR-LARS). IEEE, 2012. http://dx.doi.org/10.1109/sbr-lars.2012.57.
Wang, Mingyu, Xu Yuan, Bing Chen, Chong Lin, and Yun Shang. "Fixed-time quadrotor trajectory tracking neural network backstepping control." In 2021 33rd Chinese Control and Decision Conference (CCDC). IEEE, 2021. http://dx.doi.org/10.1109/ccdc52312.2021.9602534.
Chen, Xi, Xiaolin Hu, Hucheng Zhou, and Ningyi Xu. "FxpNet: Training a deep convolutional neural network in fixed-point representation." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7966159.
Lo, Chun Yan, and Chiu-Wing Sham. "Energy Efficient Fixed-point Inference System of Convolutional Neural Network." In 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS). IEEE, 2020. http://dx.doi.org/10.1109/mwscas48704.2020.9184436.
Bouguezzi, Safa, Hassene Faiedh, and Chokri Souani. "Hardware Implementation of Fixed-Point Convolutional Neural Network For Classification." In 2021 IEEE International Conference on Design & Test of Integrated Micro & Nano-Systems (DTS). IEEE, 2021. http://dx.doi.org/10.1109/dts52014.2021.9498072.
Zhuang, Yuan, Zhenguang Liu, Peng Qian, Qi Liu, Xiang Wang, and Qinming He. "Smart Contract Vulnerability Detection using Graph Neural Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/454.
Khatua, Kaushik, Hillol Maity, Santanu Chattopadhyay, Indranil Sengupta, Girish Patankar, and Parthajit Bhattacharya. "A Deep Neural Network Augmented Approach for Fixed Polarity AND-XOR Network Synthesis." In TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). IEEE, 2019. http://dx.doi.org/10.1109/tencon.2019.8929289.
Wawrzynski, Pawel. "Fixed point method of step-size estimation for on-line neural network training." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596596.
Повний текст джерелаЗвіти організацій з теми "Fixed neural network"
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.