Ready-made bibliography on the topic "Neural Network Pruning"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Contents
See the lists of current articles, books, theses, conference papers, and other scholarly sources on the topic "Neural Network Pruning".
Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in ".pdf" format and read its abstract online, provided the relevant details are available in the source metadata.
Journal articles on the topic "Neural Network Pruning"
Jorgensen, Thomas D., Barry P. Haynes, and Charlotte C. F. Norlund. "Pruning Artificial Neural Networks Using Neural Complexity Measures". International Journal of Neural Systems 18, no. 5 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.
Ganguli, Tushar, and Edwin K. P. Chong. "Activation-Based Pruning of Neural Networks". Algorithms 17, no. 1 (January 21, 2024): 48. http://dx.doi.org/10.3390/a17010048.
Koene, Randal A., and Yoshio Takane. "Discriminant Component Pruning: Regularization and Interpretation of Multilayered Backpropagation Networks". Neural Computation 11, no. 3 (April 1, 1999): 783–802. http://dx.doi.org/10.1162/089976699300016665.
Ling, Xing. "Summary of Deep Neural Network Pruning Algorithms". Applied and Computational Engineering 8, no. 1 (August 1, 2023): 352–61. http://dx.doi.org/10.54254/2755-2721/8/20230182.
Gong, Ziyi, Huifu Zhang, Hao Yang, Fangjun Liu, and Fan Luo. "A Review of Neural Network Lightweighting Techniques". Innovation & Technology Advances 1, no. 2 (January 16, 2024): 1–16. http://dx.doi.org/10.61187/ita.v1i2.36.
Guo, Changyi, and Ping Li. "Hybrid Pruning Method Based on Convolutional Neural Network Sensitivity and Statistical Threshold". Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012055. http://dx.doi.org/10.1088/1742-6596/2171/1/012055.
Zou, Yunhuan. "Research on Pruning Methods for Mobilenet Convolutional Neural Network". Highlights in Science, Engineering and Technology 81 (January 26, 2024): 232–36. http://dx.doi.org/10.54097/a742e326.
Liang, Ling, Lei Deng, Yueling Zeng, Xing Hu, Yu Ji, Xin Ma, Guoqi Li, and Yuan Xie. "Crossbar-Aware Neural Network Pruning". IEEE Access 6 (2018): 58324–37. http://dx.doi.org/10.1109/access.2018.2874823.
Tsai, Feng-Sheng, Yi-Li Shih, Chin-Tzong Pang, and Sheng-Yi Hsu. "Formulation of Pruning Maps with Rhythmic Neural Firing". Mathematics 7, no. 12 (December 17, 2019): 1247. http://dx.doi.org/10.3390/math7121247.
Wang, Miao, Xu Yang, Yunchong Qian, Yunlin Lei, Jian Cai, Ziyi Huan, Xialv Lin, and Hao Dong. "Adaptive Neural Network Structure Optimization Algorithm Based on Dynamic Nodes". Current Issues in Molecular Biology 44, no. 2 (February 7, 2022): 817–32. http://dx.doi.org/10.3390/cimb44020056.
Dissertations and theses on the topic "Neural Network Pruning"
Scalco, Alberto <1993>. "Feature Selection Using Neural Network Pruning". Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/14382.
Labarge, Isaac E. "Neural Network Pruning for ECG Arrhythmia Classification". DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2136.
Brantley, Kiante. "BCAP: An Artificial Neural Network Pruning Technique to Reduce Overfitting". Thesis, University of Maryland, Baltimore County, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10140605.
Determining the optimal size of a neural network is complicated. Neural networks, with many free parameters, can be used to solve very complex problems. However, these networks are susceptible to overfitting. BCAP (Brantley-Clark Artificial Neural Network Pruning Technique) addresses overfitting by combining duplicate neurons in a hidden layer, thereby forcing the network to learn more distinct features. We compare hidden units using cosine similarity and combine those whose similarity lies within a threshold ϵ. Doing so reduces the co-adaptation of neurons in the network, because hidden units that are highly correlated (i.e., similar) are combined. In this paper we show evidence that BCAP succeeds in reducing network size while maintaining, or even improving, the accuracy of neural networks during and after training.
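As an illustration of the mechanism this abstract describes, below is a minimal sketch of the merging step, assuming a single hidden layer with incoming weights W and outgoing weights V; the threshold value and the choice to sum the outgoing weights of merged units are assumptions for illustration, not necessarily the exact BCAP procedure.

```python
# Merge near-duplicate hidden units by cosine similarity of incoming weights.
import numpy as np

def merge_similar_units(W, V, eps=0.95):
    # W: (hidden, input) incoming weights; V: (output, hidden) outgoing weights
    keep = []  # indices of hidden units retained so far
    for i in range(W.shape[0]):
        merged = False
        for j in keep:
            denom = np.linalg.norm(W[i]) * np.linalg.norm(W[j]) + 1e-12
            if W[i] @ W[j] / denom >= eps:  # unit i duplicates unit j
                V[:, j] += V[:, i]          # fold its contribution into j
                merged = True
                break
        if not merged:
            keep.append(i)
    return W[keep], V[:, keep]
```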
Hubens, Nathan. "Towards lighter and faster deep neural networks with parameter pruning". Electronic Thesis or Dissertation, Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS025.
Since their resurgence in 2012, Deep Neural Networks have become ubiquitous in most disciplines of Artificial Intelligence, such as image recognition, speech processing, and Natural Language Processing. However, over the last few years, neural networks have grown exponentially deeper, involving more and more parameters. Nowadays, it is not unusual to encounter architectures with several billion parameters, whereas most models contained mere thousands less than ten years ago. This generalized increase in the number of parameters makes such large models compute-intensive and essentially energy-inefficient, so deployed models are costly to maintain and very challenging to use in resource-constrained environments. For these reasons, much research has been conducted on techniques that reduce the storage and computation required by neural networks. Among those techniques, neural network pruning, which creates sparsely connected models, has recently been at the forefront of research. However, although pruning is a prevalent compression technique, there is currently no standard way of implementing or evaluating novel pruning techniques, which makes comparison with previous research challenging. Our first contribution is therefore a novel description of pruning techniques along four axes, allowing us to define currently existing pruning techniques unequivocally and completely. Those components are: the granularity, the context, the criteria, and the schedule. Defining the pruning problem in terms of those components lets us subdivide it into four mostly independent subproblems and better identify potential research lines. Moreover, pruning methods are still at an early development stage and are primarily designed for the research community: most pruning works are implemented in a self-contained and sophisticated way, making it troublesome for non-researchers to apply such techniques without learning all the intricacies of the field. To fill this gap, we propose the FasterAI toolbox, intended both for researchers eager to create and experiment with different compression techniques and for newcomers who want to compress their neural networks for concrete applications. In particular, the sparsification capabilities of FasterAI are built around the previously defined pruning components, allowing a seamless mapping between research ideas and their implementation. We then propose four theoretical contributions, each aiming to provide new insights and improve on state-of-the-art methods along one of the four identified description axes. Those contributions were realized using the previously developed toolbox, thus validating its scientific utility. Finally, to validate the applicative character of pruning, we selected a use case: the detection of facial manipulation, also called DeepFake detection. The goal is to demonstrate that the developed tool, as well as the different proposed scientific contributions, is applicable to a complex, real-world problem. This last contribution is accompanied by a proof-of-concept application that provides DeepFake detection capabilities in a web-based environment, allowing anyone to perform detection on an image or video of their choice.
This Deep Learning era has emerged thanks to considerable improvements in high-performance hardware and access to large amounts of data. However, since the decline of Moore's Law, experts suggest that we may observe a shift in how we conceptualize hardware, going from task-agnostic to domain-specialized computation and leading to a new era of collaboration between the software, hardware, and machine learning communities. This quest for more efficiency will undeniably run through neural network compression techniques, and sparse computation in particular.
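The four description axes above (granularity, context, criteria, schedule) can be made concrete with a small sketch. The following is a hedged illustration, not the FasterAI API: the schedule is a linear sparsity ramp, the granularity is individual weights, the criterion is magnitude, and the context is local to a single tensor.

```python
import torch

def prune_step(weight, step, total_steps, final_sparsity=0.9):
    # schedule: linear ramp of the sparsity target over training
    sparsity = final_sparsity * min(step / total_steps, 1.0)
    # criterion: weight magnitude; granularity: individual weights;
    # context: local (scores are compared within this tensor only)
    scores = weight.abs().flatten()
    k = int(sparsity * scores.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = scores.kthvalue(k).values
    return (weight.abs() > threshold).to(weight.dtype)  # binary mask

w = torch.randn(64, 128)
mask = prune_step(w, step=500, total_steps=1000)
w_pruned = w * mask  # applying the mask zeroes the pruned weights
```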
Santacroce, Michael. "Neural Classification of Malware-As-Video with Considerations for In-Hardware Inferencing". University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1554216974556897.
Dupont, Robin. "Deep Neural Network Compression for Visual Recognition". Electronic Thesis or Dissertation, Sorbonne université, 2023. http://www.theses.fr/2023SORUS565.
Thanks to the miniaturisation of electronics, embedded devices have become ubiquitous since the 2010s, performing various tasks around us. As their usage expands, there is an increasing demand for efficient data processing and decision-making. Deep neural networks are apt tools for this, but they are often too large and intricate for embedded systems, so methods to compress these networks without affecting their performance are crucial. This PhD thesis introduces two pruning-based methods for compressing networks while maintaining accuracy. The thesis first details a budget-aware method for compressing large neural networks using weight reparametrisation and a budget loss, eliminating the need for fine-tuning. Traditional pruning methods often use post-training indicators to cut weights, ignoring the desired pruning rate. Our method instead incorporates a budget loss that directs pruning during training, enabling simultaneous optimisation of the topology and the weights. By soft-pruning smaller weights via reparametrisation, we reduce the accuracy loss compared to standard pruning. We validate our method on several datasets and architectures. Later, the thesis examines extracting efficient subnetworks without weight training: we aim to discern the optimal subnetwork topology within a large network, bypassing weight optimisation yet ensuring strong performance. This is realized with our Arbitrarily Shifted Log Parametrisation, a differentiable method for sampling discrete topologies that facilitates the training of masks denoting the probability of selecting each weight. A weight recalibration technique, Smart Rescale, is also presented; it boosts the performance of extracted subnetworks and speeds up their training. Our method identifies the best pruning rate in a single training cycle, averting exhaustive hyperparameter searches and training at multiple rates. Through extensive tests, our technique consistently surpasses similar state-of-the-art methods, creating streamlined networks that achieve high sparsity without notable accuracy drops.
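One plausible way to wire up the budget-aware idea sketched in this abstract is shown below: small weights are softly gated by a sigmoid, and the resulting differentiable density estimate is penalized when it exceeds a target budget. The sigmoid gate, the threshold, and the squared penalty are illustrative assumptions, not the thesis's exact reparametrisation.

```python
import torch

def soft_mask(weight, temperature=100.0, threshold=1e-2):
    # smooth 0/1 gate: weights well below `threshold` are softly pruned
    return torch.sigmoid(temperature * (weight.abs() - threshold))

def budget_loss(weights, target_density=0.1):
    # differentiable estimate of the fraction of surviving weights
    density = torch.cat([soft_mask(w).flatten() for w in weights]).mean()
    return torch.clamp(density - target_density, min=0.0) ** 2

# during training, e.g.: loss = task_loss + lambda_budget * budget_loss(params)
```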
Prono, Luciano. "Methods and Applications for Low-power Deep Neural Networks on Edge Devices". Doctoral thesis, Politecnico di Torino, 2023. https://hdl.handle.net/11583/2976593.
Zullich, Marco. "Un'analisi delle Tecniche di Potatura in Reti Neurali Profonde: Studi Sperimentali ed Applicazioni" [An Analysis of Pruning Techniques in Deep Neural Networks: Experimental Studies and Applications]. Doctoral thesis, Università degli Studi di Trieste, 2023. https://hdl.handle.net/11368/3041099.
Pruning, in the context of Machine Learning, denotes the act of removing parameters from parametric models, such as linear models, decision trees, and ANNs. Pruning can be motivated by several needs, first and foremost reducing the size and memory footprint of a model, ideally without hurting its accuracy. The scientific community's interest in pruning applied to ANNs has increased substantially in the last decade due to the dramatic expansion in the size of these models, which can hinder the deployment of ANNs on lower-end computers and poses a burden to the democratization of Artificial Intelligence. Recent pruning techniques have been shown empirically to remove a large portion of parameters (even over 99%) with no or minimal loss in accuracy. Despite this, open questions remain, especially regarding the inner dynamics of pruning: for example, how the features learned by pruned ANNs relate to those of their dense counterparts, or the ability of pruned ANNs to generalize to data or environments unseen during training. In addition, pruning is often computationally expensive and raises notable concerns about energy consumption and pollution. We present approaches for tackling these issues: comparing the representations/features learned by pruned ANNs, improving the time-efficiency of pruning, and applying pruning to simulated robots, with an eye on generalization. Finally, we showcase the use of pruning to deploy, on a low-end device with limited memory, a large object detection model for face mask detection, envisioning an application of the model to video surveillance.
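For the representation-comparison thread mentioned in this abstract, one standard generic tool is linear centered kernel alignment (CKA; Kornblith et al., 2019), sketched below on activations recorded from a pruned and a dense network on the same inputs. This is a generic implementation, not code from the thesis.

```python
import numpy as np

def linear_cka(X, Y):
    # X, Y: (n_samples, n_features) layer activations from two networks,
    # recorded on the same batch of inputs; features are mean-centered first
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# a value near 1.0 means the pruned layer spans features similar to the
# dense layer's; low values indicate the representation has drifted
```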
Yvinec, Edouard. "Efficient Neural Networks: Post Training Pruning and Quantization". Electronic Thesis or Dissertation, Sorbonne université, 2023. http://www.theses.fr/2023SORUS581.
Deep neural networks have become the most widely adopted models for solving most computer vision and natural language processing tasks. Since the renewed interest in these architectures sparked in 2012, their size, in terms of both memory footprint and computational cost, has increased tremendously, which has hindered their deployment. In particular, with the rising interest in generative AI, such as large language models and diffusion models, this phenomenon has recently reached new heights, as these models can weigh several billion parameters and require multiple high-end GPUs to run inference in real time. In response, the deep learning community has researched methods to compress and accelerate these models: efficient architecture design, tensor decomposition, pruning, and quantization. In this manuscript, I paint a landscape of the current state of the art in deep neural network compression and acceleration, as well as my contributions to the field. First, I give a general introduction to the aforementioned techniques and highlight their shortcomings and current challenges. Second, I discuss in detail my contributions to the field of deep neural network pruning, which led to the publication of three articles: RED, RED++, and SInGE. In RED and RED++, I introduced a novel way to perform data-free pruning and tensor decomposition based on redundancy reduction. In SInGE, on the other hand, I proposed a new importance-based criterion for data-driven pruning, inspired by attribution techniques, which rank inputs by their relative importance with respect to the final prediction; I adapted one of the most effective attribution techniques to rank weight importance for pruning. In the third chapter, I lay out my contributions to the field of deep quantization: SPIQ, PowerQuant, REx, NUPES, and a best-practice paper. Each of these methods addresses one of the previous limitations of post-training quantization. In SPIQ, PowerQuant, and REx, I provide, respectively, a solution to the granularity limitations of quantization, a novel non-uniform format that is particularly effective on transformer architectures, and a technique for quantization decomposition that eliminates the need for unsupported bit-widths. In the two remaining articles, I provide significant improvements over existing gradient-based post-training quantization techniques, bridging the gap between such techniques and non-uniform quantization. In the last chapter, I propose a set of leads for future work, which I believe to be the current most important unanswered questions in the field.
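As background for the attribution-inspired criterion mentioned for SInGE: a common gradient-based stand-in is first-order Taylor importance, which scores each weight by |w · ∂L/∂w|, an estimate of how much the loss would change if the weight were removed. The sketch below implements this generic criterion; the actual SInGE criterion integrates gradients along a path and differs in detail.

```python
import torch

def taylor_importance(model, loss):
    # first-order estimate of each weight's contribution to the loss
    loss.backward()
    return {
        name: (p * p.grad).abs()
        for name, p in model.named_parameters()
        if p.grad is not None
    }
# weights with the smallest scores are the first candidates for pruning
```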
Brigandì, Camilla. "Utilizzo della omologia persistente nelle reti neurali" [Using persistent homology in neural networks]. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022.
Books on the topic "Neural Network Pruning"
Hong, X. A Givens rotation based fast backward elimination algorithm for RBF neural network pruning. Sheffield: University of Sheffield, Dept. of Automatic Control and Systems Engineering, 1996.
Jorgensen, Charles C., and Ames Research Center, eds. Toward a more robust pruning procedure for MLP networks. Moffett Field, Calif.: National Aeronautics and Space Administration, Ames Research Center, 1998.
Multiple Comparison Pruning of Neural Networks. Storming Media, 1999.
Book chapters on the topic "Neural Network Pruning"
Chen, Jinting, Zhaocheng Zhu, Cheng Li, and Yuming Zhao. "Self-Adaptive Network Pruning". In Neural Information Processing, 175–86. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_15.
Gridin, Ivan. "Model Pruning". In Automated Deep Learning Using Neural Network Intelligence, 319–55. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8149-9_6.
Pei, Songwen, Jie Luo, and Sheng Liang. "DRP: Discrete Rank Pruning for Neural Network". In Lecture Notes in Computer Science, 168–79. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21395-3_16.
Widmann, Thomas, Florian Merkle, Martin Nocker, and Pascal Schöttle. "Pruning for Power: Optimizing Energy Efficiency in IoT with Neural Network Pruning". In Engineering Applications of Neural Networks, 251–63. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-34204-2_22.
Gong, Saijun, Lin Chen, and Zhicheng Dong. "Neural Network Pruning via Genetic Wavelet Channel Search". In Neural Information Processing, 348–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92270-2_30.
Li, Wenrui, and Jo Plested. "Pruning Convolutional Neural Network with Distinctiveness Approach". In Communications in Computer and Information Science, 448–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36802-9_48.
Wu, Jia-Liang, Haopu Shang, Wenjing Hong, and Chao Qian. "Robust Neural Network Pruning by Cooperative Coevolution". In Lecture Notes in Computer Science, 459–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14714-2_32.
Yang, Yang, and Baoliang Lu. "Structure Pruning Strategies for Min-Max Modular Network". In Advances in Neural Networks — ISNN 2005, 646–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427391_103.
Zhao, Feifei, Tielin Zhang, Yi Zeng, and Bo Xu. "Towards a Brain-Inspired Developmental Neural Network by Adaptive Synaptic Pruning". In Neural Information Processing, 182–91. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70093-9_19.
Pei, Songwen, Yusheng Wu, and Meikang Qiu. "Neural Network Compression and Acceleration by Federated Pruning". In Algorithms and Architectures for Parallel Processing, 173–83. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60239-0_12.
Conference papers on the topic "Neural Network Pruning"
Shang, Haopu, Jia-Liang Wu, Wenjing Hong, and Chao Qian. "Neural Network Pruning by Cooperative Coevolution". In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/667.
Wang, Huan, Can Qin, Yue Bai, Yulun Zhang, and Yun Fu. "Recent Advances on Neural Network Pruning at Initialization". In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/786.
Zhao, Chenglong, Bingbing Ni, Jian Zhang, Qiwei Zhao, Wenjun Zhang, and Qi Tian. "Variational Convolutional Neural Network Pruning". In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00289.
Cai, Xingyu, Jinfeng Yi, Fan Zhang, and Sanguthevar Rajasekaran. "Adversarial Structured Neural Network Pruning". In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3358150.
Lin, Chih-Chia, Chia-Yin Liu, Chih-Hsuan Yen, Tei-Wei Kuo, and Pi-Cheng Hsiu. "Intermittent-Aware Neural Network Pruning". In 2023 60th ACM/IEEE Design Automation Conference (DAC). IEEE, 2023. http://dx.doi.org/10.1109/dac56929.2023.10247825.
Shahhosseini, Sina, Ahmad Albaqsami, Masoomeh Jasemi, and Nader Bagherzadeh. "Partition Pruning: Parallelization-Aware Pruning for Dense Neural Networks". In 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). IEEE, 2020. http://dx.doi.org/10.1109/pdp50117.2020.00053.
Jeong, Taehee, Ehsam Ghasemi, Jorn Tuyls, Elliott Delaye, and Ashish Sirasao. "Neural Network Pruning and Hardware Acceleration". In 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC). IEEE, 2020. http://dx.doi.org/10.1109/ucc48980.2020.00069.
Xu, Sheng, Anran Huang, Lei Chen, and Baochang Zhang. "Convolutional Neural Network Pruning: A Survey". In 2020 39th Chinese Control Conference (CCC). IEEE, 2020. http://dx.doi.org/10.23919/ccc50068.2020.9189610.
Molchanov, Pavlo, Arun Mallya, Stephen Tyree, Iuri Frosio, and Jan Kautz. "Importance Estimation for Neural Network Pruning". In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.01152.
Setiono, R., and A. Gaweda. "Neural Network Pruning for Function Approximation". In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium. IEEE, 2000. http://dx.doi.org/10.1109/ijcnn.2000.859435.
Reports on the topic "Neural Network Pruning"
Guan, Hui, Xipeng Shen, Seung-Hwan Lim, and Robert M. Patton. Composability-Centered Convolutional Neural Network Pruning. Office of Scientific and Technical Information (OSTI), February 2018. http://dx.doi.org/10.2172/1427608.