Journal articles on the topic "Sparse Accelerator"
Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles
Browse the top 50 scholarly journal articles on the topic "Sparse Accelerator".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic citation for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.
Browse journal articles from a wide range of disciplines and compile an accurate bibliography.
Xie, Xiaoru, Mingyu Zhu, Siyuan Lu, and Zhongfeng Wang. "Efficient Layer-Wise N:M Sparse CNN Accelerator with Flexible SPEC: Sparse Processing Element Clusters". Micromachines 14, no. 3 (February 24, 2023): 528. http://dx.doi.org/10.3390/mi14030528.
Li, Yihang. "Sparse-Aware Deep Learning Accelerator". Highlights in Science, Engineering and Technology 39 (April 1, 2023): 305–10. http://dx.doi.org/10.54097/hset.v39i.6544.
Xu, Jia, Han Pu, and Dong Wang. "Sparse Convolution FPGA Accelerator Based on Multi-Bank Hash Selection". Micromachines 16, no. 1 (December 27, 2024): 22. https://doi.org/10.3390/mi16010022.
Zheng, Yong, Haigang Yang, Yiping Jia, and Zhihong Huang. "PermLSTM: A High Energy-Efficiency LSTM Accelerator Architecture". Electronics 10, no. 8 (April 8, 2021): 882. http://dx.doi.org/10.3390/electronics10080882.
Yavits, Leonid, and Ran Ginosar. "Accelerator for Sparse Machine Learning". IEEE Computer Architecture Letters 17, no. 1 (January 1, 2018): 21–24. http://dx.doi.org/10.1109/lca.2017.2714667.
Teodorovic, Predrag, and Rastislav Struharik. "Hardware Acceleration of Sparse Oblique Decision Trees for Edge Computing". Elektronika ir Elektrotechnika 25, no. 5 (October 6, 2019): 18–24. http://dx.doi.org/10.5755/j01.eie.25.5.24351.
Vranjkovic, Vuk, Predrag Teodorovic, and Rastislav Struharik. "Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models". Electronics 11, no. 8 (April 8, 2022): 1178. http://dx.doi.org/10.3390/electronics11081178.
Gowda, Kavitha Malali Vishveshwarappa, Sowmya Madhavan, Stefano Rinaldi, Parameshachari Bidare Divakarachari, and Anitha Atmakur. "FPGA-Based Reconfigurable Convolutional Neural Network Accelerator Using Sparse and Convolutional Optimization". Electronics 11, no. 10 (May 22, 2022): 1653. http://dx.doi.org/10.3390/electronics11101653.
Dey, Sumon, Lee Baker, Joshua Schabel, Weifu Li, and Paul D. Franzon. "A Scalable Cluster-based Hierarchical Hardware Accelerator for a Cortically Inspired Algorithm". ACM Journal on Emerging Technologies in Computing Systems 17, no. 4 (June 30, 2021): 1–29. http://dx.doi.org/10.1145/3447777.
Liu, Sheng, Yasong Cao, and Shuwei Sun. "Mapping and Optimization Method of SpMV on Multi-DSP Accelerator". Electronics 11, no. 22 (November 11, 2022): 3699. http://dx.doi.org/10.3390/electronics11223699.
Vranjkovic, Vuk, and Rastislav Struharik. "Hardware Acceleration of Sparse Support Vector Machines for Edge Computing". Elektronika ir Elektrotechnika 26, no. 3 (June 27, 2020): 42–53. http://dx.doi.org/10.5755/j01.eie.26.3.25796.
Liu, Peng, and Yu Wang. "A Low-Power General Matrix Multiplication Accelerator with Sparse Weight-and-Output Stationary Dataflow". Micromachines 16, no. 1 (January 16, 2025): 101. https://doi.org/10.3390/mi16010101.
Wang, Deguang, Junzhong Shen, Mei Wen, and Chunyuan Zhang. "Efficient Implementation of 2D and 3D Sparse Deconvolutional Neural Networks with a Uniform Architecture on FPGAs". Electronics 8, no. 7 (July 18, 2019): 803. http://dx.doi.org/10.3390/electronics8070803.
He, Pengzhou, Yazheng Tu, Tianyou Bao, Çetin Kaya Koç, and Jiafeng Xie. "HSPA: High-Throughput Sparse Polynomial Multiplication for Code-based Post-Quantum Cryptography". ACM Transactions on Embedded Computing Systems 24, no. 1 (December 10, 2024): 1–24. https://doi.org/10.1145/3703837.
XIAO, Hao, Kaikai ZHAO, and Guangzhu LIU. "Efficient Hardware Accelerator for Compressed Sparse Deep Neural Network". IEICE Transactions on Information and Systems E104.D, no. 5 (May 1, 2021): 772–75. http://dx.doi.org/10.1587/transinf.2020edl8153.
Li, Jiajun, Shuhao Jiang, Shijun Gong, Jingya Wu, Junchao Yan, Guihai Yan, and Xiaowei Li. "SqueezeFlow: A Sparse CNN Accelerator Exploiting Concise Convolution Rules". IEEE Transactions on Computers 68, no. 11 (November 1, 2019): 1663–77. http://dx.doi.org/10.1109/tc.2019.2924215.
Li, Fanrong, Gang Li, Zitao Mo, Xiangyu He, and Jian Cheng. "FSA: A Fine-Grained Systolic Accelerator for Sparse CNNs". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 39, no. 11 (November 2020): 3589–600. http://dx.doi.org/10.1109/tcad.2020.3012212.
Yang, Tao, Zhezhi He, Tengchuan Kou, Qingzheng Li, Qi Han, Haibao Yu, Fangxin Liu, Yun Liang, and Li Jiang. "BISWSRBS: A Winograd-based CNN Accelerator with a Fine-grained Regular Sparsity Pattern and Mixed Precision Quantization". ACM Transactions on Reconfigurable Technology and Systems 14, no. 4 (December 31, 2021): 1–28. http://dx.doi.org/10.1145/3467476.
Wu, Di, Xitian Fan, Wei Cao, and Lingli Wang. "SWM: A High-Performance Sparse-Winograd Matrix Multiplication CNN Accelerator". IEEE Transactions on Very Large Scale Integration (VLSI) Systems 29, no. 5 (May 2021): 936–49. http://dx.doi.org/10.1109/tvlsi.2021.3060041.
Liu, Qingliang, Jinmei Lai, and Jiabao Gao. "An Efficient Channel-Aware Sparse Binarized Neural Networks Inference Accelerator". IEEE Transactions on Circuits and Systems II: Express Briefs 69, no. 3 (March 2022): 1637–41. http://dx.doi.org/10.1109/tcsii.2021.3119369.
Sun, Yichun, Hengzhu Liu, and Tong Zhou. "Sparse Cholesky Factorization on FPGA Using Parameterized Model". Mathematical Problems in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/3021591.
Wang, Renping, Shun Li, Enhao Tang, Sen Lan, Yajing Liu, Jing Yang, Shizhen Huang, and Hailong Hu. "SH-GAT: Software-hardware co-design for accelerating graph attention networks on FPGA". Electronic Research Archive 32, no. 4 (2024): 2310–22. http://dx.doi.org/10.3934/era.2024105.
Xie, Xiaoru, Jun Lin, Zhongfeng Wang, and Jinghe Wei. "An Efficient and Flexible Accelerator Design for Sparse Convolutional Neural Networks". IEEE Transactions on Circuits and Systems I: Regular Papers 68, no. 7 (July 2021): 2936–49. http://dx.doi.org/10.1109/tcsi.2021.3074300.
Lai, Bo-Cheng, Jyun-Wei Pan, and Chien-Yu Lin. "Enhancing Utilization of SIMD-Like Accelerator for Sparse Convolutional Neural Networks". IEEE Transactions on Very Large Scale Integration (VLSI) Systems 27, no. 5 (May 2019): 1218–22. http://dx.doi.org/10.1109/tvlsi.2019.2897052.
Lu, Yuntao, Chao Wang, Lei Gong, and Xuehai Zhou. "SparseNN: A Performance-Efficient Accelerator for Large-Scale Sparse Neural Networks". International Journal of Parallel Programming 46, no. 4 (October 3, 2017): 648–59. http://dx.doi.org/10.1007/s10766-017-0528-8.
Melham, R. "A systolic accelerator for the iterative solution of sparse linear systems". IEEE Transactions on Computers 38, no. 11 (1989): 1591–95. http://dx.doi.org/10.1109/12.42132.
Li, Tao, and Li Shen. "A sparse matrix vector multiplication accelerator based on high-bandwidth memory". Computers and Electrical Engineering 105 (January 2023): 108488. http://dx.doi.org/10.1016/j.compeleceng.2022.108488.
Zhu, Chaoyang, Kejie Huang, Shuyuan Yang, Ziqi Zhu, Hejia Zhang, and Haibin Shen. "An Efficient Hardware Accelerator for Structured Sparse Convolutional Neural Networks on FPGAs". IEEE Transactions on Very Large Scale Integration (VLSI) Systems 28, no. 9 (September 2020): 1953–65. http://dx.doi.org/10.1109/tvlsi.2020.3002779.
Wang, Zixiao, Ke Xu, Shuaixiao Wu, Li Liu, Lingzhi Liu, and Dong Wang. "Sparse-YOLO: Hardware/Software Co-Design of an FPGA Accelerator for YOLOv2". IEEE Access 8 (2020): 116569–85. http://dx.doi.org/10.1109/access.2020.3004198.
Humble, Ryan, William Colocho, Finn O’Shea, Daniel Ratner, and Eric Darve. "Resilient VAE: Unsupervised Anomaly Detection at the SLAC Linac Coherent Light Source". EPJ Web of Conferences 295 (2024): 09033. http://dx.doi.org/10.1051/epjconf/202429509033.
Liang, Zhongwei, Xiaochu Liu, Guilin Wen, and Jinrui Xiao. "Effectiveness prediction of abrasive jetting stream of accelerator tank using normalized sparse autoencoder-adaptive neural fuzzy inference system". Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 234, no. 13 (June 26, 2020): 1615–39. http://dx.doi.org/10.1177/0954405420927582.
Shimoda, Masayuki, Youki Sada, and Hiroki Nakahara. "FPGA-Based Inter-layer Pipelined Accelerators for Filter-Wise Weight-Balanced Sparse Fully Convolutional Networks with Overlapped Tiling". Journal of Signal Processing Systems 93, no. 5 (February 13, 2021): 499–512. http://dx.doi.org/10.1007/s11265-021-01642-6.
Wang, Miao, Xiaoya Fan, Wei Zhang, Ting Zhu, Tengteng Yao, Hui Ding, and Danghui Wang. "Balancing memory-accessing and computing over sparse DNN accelerator via efficient data packaging". Journal of Systems Architecture 117 (August 2021): 102094. http://dx.doi.org/10.1016/j.sysarc.2021.102094.
Zhao, Yunping, Jianzhuang Lu, and Xiaowen Chen. "A Dynamically Reconfigurable Accelerator Design Using a Sparse-Winograd Decomposition Algorithm for CNNs". Computers, Materials & Continua 66, no. 1 (2020): 517–35. http://dx.doi.org/10.32604/cmc.2020.012380.
Liu, Zhi-Gang, Paul N. Whatmough, and Matthew Mattina. "Systolic Tensor Array: An Efficient Structured-Sparse GEMM Accelerator for Mobile CNN Inference". IEEE Computer Architecture Letters 19, no. 1 (January 1, 2020): 34–37. http://dx.doi.org/10.1109/lca.2020.2979965.
Pham, Duc-An, and Bo-Cheng Lai. "Dataflow and microarchitecture co-optimisation for sparse CNN on distributed processing element accelerator". IET Circuits, Devices & Systems 14, no. 8 (November 1, 2020): 1185–94. http://dx.doi.org/10.1049/iet-cds.2019.0225.
Zhang, Min, Linpeng Li, Hai Wang, Yan Liu, Hongbo Qin, and Wei Zhao. "Optimized Compression for Implementing Convolutional Neural Networks on FPGA". Electronics 8, no. 3 (March 6, 2019): 295. http://dx.doi.org/10.3390/electronics8030295.
Liu, Chester, Sung-Gun Cho, and Zhengya Zhang. "A 2.56-mm2 718GOPS Configurable Spiking Convolutional Sparse Coding Accelerator in 40-nm CMOS". IEEE Journal of Solid-State Circuits 53, no. 10 (October 2018): 2818–27. http://dx.doi.org/10.1109/jssc.2018.2865457.
Aimar, Alessandro, Hesham Mostafa, Enrico Calabrese, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Iulia-Alexandra Lungu, Moritz B. Milde, et al. "NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps". IEEE Transactions on Neural Networks and Learning Systems 30, no. 3 (March 2019): 644–56. http://dx.doi.org/10.1109/tnnls.2018.2852335.
Qian, Cheng, Bruce Childers, Libo Huang, Hui Guo, and Zhiying Wang. "CGAcc: A Compressed Sparse Row Representation-Based BFS Graph Traversal Accelerator on Hybrid Memory Cube". Electronics 7, no. 11 (November 7, 2018): 307. http://dx.doi.org/10.3390/electronics7110307.
Bian, Haoqiong, Tiannan Sha, and Anastasia Ailamaki. "Using Cloud Functions as Accelerator for Elastic Data Analytics". Proceedings of the ACM on Management of Data 1, no. 2 (June 13, 2023): 1–27. http://dx.doi.org/10.1145/3589306.
Chen, Xi, Chang Gao, Zuowen Wang, Longbiao Cheng, Sheng Zhou, Shih-Chii Liu, and Tobi Delbruck. "Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11399–406. http://dx.doi.org/10.1609/aaai.v38i10.29020.
Weng, Yui-Kai, Shih-Hsu Huang, and Hsu-Yu Kao. "Block-Based Compression and Corresponding Hardware Circuits for Sparse Activations". Sensors 21, no. 22 (November 10, 2021): 7468. http://dx.doi.org/10.3390/s21227468.
Xu, Shiyao, Jingfei Jiang, Jinwei Xu, and Xifu Qian. "Efficient SpMM Accelerator for Deep Learning: Sparkle and Its Automated Generator". ACM Transactions on Reconfigurable Technology and Systems, June 7, 2024. http://dx.doi.org/10.1145/3665896.
Hwang, Soojin, Daehyeon Baek, Jongse Park, and Jaehyuk Huh. "Cerberus: Triple Mode Acceleration of Sparse Matrix and Vector Multiplication". ACM Transactions on Architecture and Code Optimization, March 17, 2024. http://dx.doi.org/10.1145/3653020.
Xie, Kunpeng, Ye Lu, Xinyu He, Dezhi Yi, Huijuan Dong, and Yao Chen. "Winols: A Large-Tiling Sparse Winograd CNN Accelerator on FPGAs". ACM Transactions on Architecture and Code Optimization, January 31, 2024. http://dx.doi.org/10.1145/3643682.
Wang, Bo, Sheng Ma, Shengbai Luo, Lizhou Wu, Jianmin Zhang, Chunyuan Zhang, and Tiejun Li. "SparGD: A Sparse GEMM Accelerator with Dynamic Dataflow". ACM Transactions on Design Automation of Electronic Systems, November 27, 2023. http://dx.doi.org/10.1145/3634703.
Soltaniyeh, Mohammadreza, Richard P. Martin, and Santosh Nagarakatte. "An Accelerator for Sparse Convolutional Neural Networks Leveraging Systolic General Matrix-Matrix Multiplication". ACM Transactions on Architecture and Code Optimization, April 25, 2022. http://dx.doi.org/10.1145/3532863.
Del Sarto, Nicola, Diane A. Isabelle, Valentina Cucino, and Alberto Di Minin. "Engaging with startups through corporate accelerators: the case of H‐FARM's White Label Accelerator". R&D Management, July 9, 2024. http://dx.doi.org/10.1111/radm.12705.