Ready-made bibliography on the topic "Neural network accelerator"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Neural network accelerator".
Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.
Journal articles on the topic "Neural network accelerator"
Eliahu, Adi, Ronny Ronen, Pierre-Emmanuel Gaillardon, and Shahar Kvatinsky. "multiPULPly". ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (April 2021): 1–27. http://dx.doi.org/10.1145/3432815.
Cho, Jaechan, Yongchul Jung, Seongjoo Lee, and Yunho Jung. "Reconfigurable Binary Neural Network Accelerator with Adaptive Parallelism Scheme". Electronics 10, no. 3 (January 20, 2021): 230. http://dx.doi.org/10.3390/electronics10030230.
Noskova, E. S., I. E. Zakharov, Y. N. Shkandybin, and S. G. Rykovanov. "Towards energy-efficient neural network calculations". Computer Optics 46, no. 1 (February 2022): 160–66. http://dx.doi.org/10.18287/2412-6179-co-914.
Hong, JiUn, Saad Arslan, TaeGeon Lee, and HyungWon Kim. "Design of Power-Efficient Training Accelerator for Convolution Neural Networks". Electronics 10, no. 7 (March 26, 2021): 787. http://dx.doi.org/10.3390/electronics10070787.
Ferianc, Martin, Hongxiang Fan, Divyansh Manocha, Hongyu Zhou, Shuanglong Liu, Xinyu Niu, and Wayne Luk. "Improving Performance Estimation for Design Space Exploration for Convolutional Neural Network Accelerators". Electronics 10, no. 4 (February 23, 2021): 520. http://dx.doi.org/10.3390/electronics10040520.
Sunny, Febin P., Asif Mirza, Mahdi Nikdast, and Sudeep Pasricha. "ROBIN: A Robust Optical Binary Neural Network Accelerator". ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–24. http://dx.doi.org/10.1145/3476988.
Anmin, Kong, and Zhao Bin. "A Parallel Loading Based Accelerator for Convolution Neural Network". International Journal of Machine Learning and Computing 10, no. 5 (October 5, 2020): 669–74. http://dx.doi.org/10.18178/ijmlc.2020.10.5.989.
Xia, Chengpeng, Yawen Chen, Haibo Zhang, Hao Zhang, Fei Dai, and Jigang Wu. "Efficient neural network accelerators with optical computing and communication". Computer Science and Information Systems, no. 00 (2022): 66. http://dx.doi.org/10.2298/csis220131066x.
Tang, Wenkai, and Peiyong Zhang. "GPGCN: A General-Purpose Graph Convolution Neural Network Accelerator Based on RISC-V ISA Extension". Electronics 11, no. 22 (November 21, 2022): 3833. http://dx.doi.org/10.3390/electronics11223833.
An, Fubang, Lingli Wang, and Xuegong Zhou. "A High Performance Reconfigurable Hardware Architecture for Lightweight Convolutional Neural Network". Electronics 12, no. 13 (June 27, 2023): 2847. http://dx.doi.org/10.3390/electronics12132847.
Doctoral dissertations on the topic "Neural network accelerator"
Tianxu, Yue. "Convolutional Neural Network FPGA-accelerator on Intel DE10-Standard FPGA". Thesis, Linköpings universitet, Elektroniska Kretsar och System, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178174.
Oudrhiri, Ali. "Performance of a Neural Network Accelerator Architecture and its Optimization Using a Pipeline-Based Approach". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS658.pdf.
In recent years, neural networks have gained widespread popularity for their versatility and effectiveness in solving a wide range of complex tasks. Their ability to learn and make predictions from large datasets has revolutionized various fields. However, as neural networks find applications in an ever-expanding array of domains, their significant computational requirements become a pressing challenge. This demand is particularly problematic when deploying neural networks in resource-constrained embedded devices, especially in the context of edge computing for inference tasks. Nowadays, neural network accelerator chips emerge as the optimal choice for supporting neural networks at the edge. These chips offer remarkable efficiency with their compact size, low power consumption, and reduced latency. Moreover, integrating them in a single-chip environment also enhances security by minimizing external data communication. Edge computing brings diverse requirements that necessitate trade-offs among various performance aspects, which has led to accelerator architectures that are highly configurable and can adapt to distinct performance demands. In this context, the focus lies on Gemini, a configurable inference neural network accelerator designed with an imposed architecture and implemented using High-Level Synthesis techniques. Its design and implementation were driven by the need for parallelization configurability and performance optimization. Once this accelerator was designed, demonstrating the power of its configurability became essential, helping users select the most suitable architecture for their neural networks. To achieve this objective, this thesis contributed a performance prediction strategy operating at a high level of abstraction, which takes into account the chosen architecture and neural network configuration.
This tool assists clients in choosing the appropriate architecture for their specific neural network applications. During the research, we noticed that a single accelerator has several limits and that increasing parallelism yields diminishing performance returns. Consequently, we adopted a new strategy for optimizing neural network acceleration: a high-level approach that does not require fine-grained accelerator optimizations. We organized multiple Gemini instances into a pipeline and allocated layers to the different accelerators to maximize performance. We proposed solutions for two scenarios. In a user scenario, the pipeline structure is predefined, with a fixed number of accelerators, accelerator configurations, and RAM sizes; here we proposed methods to map the layers onto the accelerators so as to optimize execution performance. In a designer scenario, the pipeline structure is not fixed, and the number and configuration of the accelerators may be chosen to optimize both execution and hardware performance. This pipeline strategy has proven effective for the Gemini accelerator. Although this thesis originated from a specific industrial need, certain solutions developed during the research can be applied or adapted to other neural network accelerators. Notably, the performance prediction strategy and the high-level optimization of NN processing through pipelining multiple instances offer valuable insights for broader application.
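The layer-to-accelerator mapping problem described in this abstract can be illustrated with a toy model (an illustrative simplification; the function name and costs below are hypothetical, not taken from the thesis): given per-layer latency estimates, split the layers, in order, across a fixed number of pipelined accelerators so that the slowest stage, which bounds pipeline throughput, is as fast as possible.

```python
def min_bottleneck_partition(layer_costs, num_accelerators):
    """Split layers (kept in order) across accelerators so the slowest
    pipeline stage is as fast as possible (binary search on the answer)."""
    def feasible(limit):
        stages, current = 1, 0
        for cost in layer_costs:
            if cost > limit:
                return False          # one layer alone exceeds the budget
            if current + cost > limit:
                stages += 1           # start a new pipeline stage
                current = cost
            else:
                current += cost
        return stages <= num_accelerators

    lo, hi = max(layer_costs), sum(layer_costs)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, four layers with costs [4, 3, 2, 5] on two accelerators are best split as [4, 3] and [2, 5], giving a pipeline bottleneck of 7.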
Maltoni, Pietro. "Progetto di un acceleratore hardware per layer di convoluzioni depthwise in applicazioni di Deep Neural Network". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24205/.
Xu, Hongjie. "Energy-Efficient On-Chip Cache Architectures and Deep Neural Network Accelerators Considering the Cost of Data Movement". Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263786.
Riera, Villanueva Marc. "Low-power accelerators for cognitive computing". Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669828.
Deep neural networks (DNNs) have achieved tremendous success in cognitive applications and are especially efficient in classification and decision-making problems such as speech recognition or machine translation. Mobile devices rely more and more on DNNs to understand the world. Smartphones, smartwatches, and even cars perform discriminative tasks such as face or object recognition on a daily basis. Despite the growing popularity of DNNs, running them on mobile systems poses several challenges: providing high accuracy and performance within a small memory and energy budget. Modern DNNs consist of millions of parameters that require enormous computational and memory resources and therefore cannot be used directly on low-power, resource-constrained systems. The goal of this thesis is to address these problems and propose new solutions for designing efficient accelerators for DNN-based cognitive computing systems. First, we focus on optimizing DNN inference for sequence-processing applications. We analyze the similarity of the inputs between consecutive DNN executions. We then propose DISC, an accelerator that implements a differential-computation technique, based on the high degree of similarity between inputs, to reuse the computations of the previous execution instead of computing the whole network. We observe that, on average, more than 60% of the inputs of any layer of the DNNs studied exhibit minor changes with respect to the previous execution. Avoiding the memory accesses and computations for these inputs yields energy savings of 63% on average. Second, we propose optimizing the inference of DNNs based on FC layers. We first analyze the number of unique weights per input neuron in several networks.
Exploiting common optimizations such as linear quantization, we observe a very small number of unique weights per input in several FC layers of modern DNNs. Then, to improve the energy efficiency of FC-layer computation, we present CREW, an accelerator that implements an efficient computation-reuse and weight-storage mechanism. CREW reduces the number of multiplications and provides significant savings in memory use. We evaluate CREW on a diverse set of modern DNNs. CREW provides, on average, a 2.61x speedup and 2.42x energy savings. Third, we propose a mechanism to optimize RNN inference. Recurrent-network cells perform element-wise multiplications of the activations of different gates, with sigmoid and tanh being the usual activation functions. We analyze the activation-function values and show that a significant fraction are saturated towards zero or one in a set of popular RNNs. We then propose CGPA to dynamically prune RNN activations at a coarse granularity. CGPA avoids evaluating entire neurons whenever the outputs of paired neurons are saturated. CGPA significantly reduces the amount of computation and memory accesses, achieving on average a 12% improvement in performance and energy savings. Finally, in the last contribution of this thesis we focus on static DNN pruning methodologies. Pruning reduces the memory footprint and computational work by removing redundant connections or neurons. However, we show that previous pruning schemes use a very long iterative process that requires training the DNN many times to tune the pruning parameters.
We then propose a pruning scheme based on principal component analysis and the relative importance of each neuron's connections that automatically optimizes the DNN in a single shot, without the need to manually tune multiple parameters.
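The differential-computation idea behind DISC, reusing the previous execution's results when inputs barely change, can be sketched as a toy fully connected layer update (an illustrative model only; the function name and threshold parameter are hypothetical, not part of the cited thesis).

```python
def differential_fc(prev_in, new_in, weights, prev_out, threshold=0.0):
    """Incrementally update a fully connected layer's output: only inputs
    whose value changed by more than `threshold` since the previous run
    contribute work; unchanged inputs cost no MACs and no weight fetches."""
    out = list(prev_out)
    for i, (old, new) in enumerate(zip(prev_in, new_in)):
        delta = new - old
        if abs(delta) > threshold:
            for j, w in enumerate(weights[i]):  # weights[i][j]: input i -> output j
                out[j] += delta * w
    return out
```

With, say, 60% of the inputs unchanged between consecutive executions, roughly 60% of the multiplications and weight fetches are skipped, which is the kind of saving the abstract reports.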
Khan, Muhammad Jazib. "Programmable Address Generation Unit for Deep Neural Network Accelerators". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-271884.
Convolutional neural networks are becoming more and more popular because of their applications in revolutionary technologies such as autonomous driving, biomedical image processing, and natural language processing. With this growth in adoption, the complexity of the underlying algorithms also increases. This has implications for the computing platforms, such as GPU-, FPGA-, or ASIC-based accelerators, and especially for the Address Generation Unit (AGU), which is responsible for memory accesses. Existing accelerators typically have parametrizable-datapath AGUs with very limited adaptability to evolving algorithms. New hardware is therefore required for new algorithms, which is a very inefficient approach in terms of time, resources, and reusability. In this work, six algorithms with different implications for address-generation hardware are evaluated, and a fully programmable AGU (PAGU) is presented that can adapt to these algorithms. The algorithms are standard, strided, dilated, upsampled, and padded convolution, and max pooling. The proposed AGU architecture is a Very Long Instruction Word-based application-specific instruction-set processor with specialized components, such as hardware counters and zero-overhead loops, and a powerful Instruction Set Architecture (ISA) that can model static and dynamic constraints and affine and non-affine address equations. The goal has been to minimize the trade-off between flexibility and area, power, and performance. For a real test network for semantic segmentation, the results show that PAGU achieves close to ideal performance, 1 cycle per address, for all algorithms considered except upsampled convolution, for which it takes 1.7 cycles per address. The area of PAGU is roughly 4.6 times larger than the parametrizable-datapath approach, which is still reasonable given the large gains in flexibility.
The potential of PAGU is not limited to neural-network applications; it extends to more general digital signal processing domains, which can be explored in the future.
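The address streams such an AGU must generate can be mimicked in software as a loop nest (a simplified, single-channel sketch assuming row-major layout; the function and its parameters are illustrative, not the thesis's ISA): the same four nested loops cover standard, strided, and dilated convolution just by changing two parameters.

```python
def conv_addresses(height, width, kernel, stride=1, dilation=1):
    """Yield the row-major input addresses a 2D convolution touches,
    window by window -- the loop nest a programmable AGU would execute
    with hardware counters and zero-overhead loops."""
    span = (kernel - 1) * dilation + 1  # receptive-field extent
    for oy in range(0, height - span + 1, stride):
        for ox in range(0, width - span + 1, stride):
            for ky in range(kernel):
                for kx in range(kernel):
                    yield (oy + ky * dilation) * width + (ox + kx * dilation)
```

On a 3x3 input with a 2x2 kernel, the first window touches addresses 0, 1, 3, 4; setting dilation=2 instead yields 0, 2, 6, 8.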
Jalasutram, Rommel. "Acceleration of spiking neural networks on multicore architectures". Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1252424720/.
Han, Bing. "ACCELERATION OF SPIKING NEURAL NETWORK ON GENERAL PURPOSE GRAPHICS PROCESSORS". University of Dayton / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1271368713.
Chen, Yu-Hsin. "Architecture design for highly flexible and energy-efficient deep neural network accelerators". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117838.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 141-147).
Deep neural networks (DNNs) are the backbone of modern artificial intelligence (AI). However, due to their high computational complexity and diverse shapes and sizes, dedicated accelerators that can achieve high performance and energy efficiency across a wide range of DNNs are critical for enabling AI in real-world applications. To address this, we present Eyeriss, a co-design of software and hardware architecture for DNN processing that is optimized for performance, energy efficiency and flexibility. Eyeriss features a novel Row-Stationary (RS) dataflow to minimize data movement when processing a DNN, which is the bottleneck of both performance and energy efficiency. The RS dataflow supports highly-parallel processing while fully exploiting data reuse in a multi-level memory hierarchy to optimize for the overall system energy efficiency given any DNN shape and size. It achieves 1.4x to 2.5x higher energy efficiency than other existing dataflows. To support the RS dataflow, we present two versions of the Eyeriss architecture. Eyeriss v1 targets large DNNs that have plenty of data reuse. It features a flexible mapping strategy for high performance and a multicast on-chip network (NoC) for high data reuse, and further exploits data sparsity to reduce processing element (PE) power by 45% and off-chip bandwidth by up to 1.9x. Fabricated in a 65nm CMOS, Eyeriss v1 consumes 278 mW at 34.7 fps for the CONV layers of AlexNet, which is 10x more efficient than a mobile GPU. Eyeriss v2 addresses support for the emerging compact DNNs that introduce higher variation in data reuse. It features a RS+ dataflow that improves PE utilization, and a flexible and scalable NoC that adapts to the bandwidth requirement while also exploiting available data reuse. Together, they provide over 10x higher throughput than Eyeriss v1 at 256 PEs. Eyeriss v2 also exploits sparsity and SIMD for an additional 6x increase in throughput.
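The data-reuse argument behind the RS dataflow can be made concrete with a toy counting model (illustrative only; the function name and numbers are not from the thesis): in a convolutional layer every weight participates in one MAC per output position, so keeping it stationary near the PEs replaces thousands of expensive off-chip fetches with a single one.

```python
def conv_layer_stats(h_out, w_out, k, c_in, c_out):
    """Count MACs and weight reuse for a 2D convolutional layer.
    Each of the k*k*c_in*c_out weights is used in h_out*w_out MACs,
    so fetching it once and reusing it on-chip avoids h_out*w_out - 1
    costly DRAM reads per weight."""
    weights = k * k * c_in * c_out
    weight_reuse = h_out * w_out          # MACs each weight takes part in
    return {"macs": weights * weight_reuse,
            "weights": weights,
            "weight_reuse": weight_reuse}
```

For an AlexNet-like first layer with 55x55 output positions, each weight is reused 3,025 times, which is exactly the reuse a dataflow tries to capture in the lower, cheaper levels of the memory hierarchy.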
Gaura, Elena Ioana. "Neural network techniques for the control and identification of acceleration sensors". Thesis, Coventry University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313132.
Books on the topic "Neural network accelerator"
Whitehead, P. A. Design considerations for a hardware accelerator for Kohonen unsupervised learning in artificial neural networks. Manchester: UMIST, 1997.
Jones, Steven P. Neural network models of simple mechanical systems illustrating the feasibility of accelerated life testing. [Washington, DC]: National Aeronautics and Space Administration, 1996.
Munir. Accelerators for Convolutional Neural Networks. John Wiley & Sons, Limited, 2023.
Accelerated training for large feedforward neural networks. Moffett Field, Calif: National Aeronautics and Space Administration, Ames Research Center, 1998.
Raff, Lionel, Ranga Komanduri, Martin Hagan, and Satish Bukkapatnam. Neural Networks in Chemical Reaction Dynamics. Oxford University Press, 2012. http://dx.doi.org/10.1093/oso/9780199765652.001.0001.
Fox, Raymond. The Use of Self. Oxford University Press, 2011. http://dx.doi.org/10.1093/oso/9780190616144.001.0001.
Konrad, Kerstin, Adriana Di Martino, and Yuta Aoki. Brain volumes and intrinsic brain connectivity in ADHD. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780198739258.003.0006.
Book chapters on the topic "Neural network accelerator"
Huang, Hantao, and Hao Yu. "Distributed-Solver for Networked Neural Network". In Compact and Fast Machine Learning Accelerator for IoT Devices, 107–43. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_5.
Nakajima, Toshiya. "Architecture of the Neural Network Simulation Accelerator NEUROSIM/L". In International Neural Network Conference, 722–25. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_61.
Reagen, Brandon, Robert Adolf, Paul Whatmough, Gu-Yeon Wei, and David Brooks. "Neural Network Accelerator Optimization: A Case Study". In Deep Learning for Computer Architects, 43–61. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-01756-8_4.
Huang, Hantao, and Hao Yu. "Tensor-Solver for Deep Neural Network". In Compact and Fast Machine Learning Accelerator for IoT Devices, 63–105. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_4.
Ae, Tadashi, and Reiji Aibara. "A Neural Network for 3-D VLSI Accelerator". In The Kluwer International Series in Engineering and Computer Science, 179–88. Boston, MA: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-1619-0_16.
Huang, Hantao, and Hao Yu. "Least-Squares-Solver for Shallow Neural Network". In Compact and Fast Machine Learning Accelerator for IoT Devices, 29–62. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_3.
Hu, Lili. "Frameworks for Efficient Convolutional Neural Network Accelerator on FPGA". In Advances in Intelligent Systems and Computing, 651–57. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8944-2_75.
Cheung, Kit, Simon R. Schultz, and Wayne Luk. "A Large-Scale Spiking Neural Network Accelerator for FPGA Systems". In Artificial Neural Networks and Machine Learning – ICANN 2012, 113–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_15.
Wu, Jin, Xiangyang Shi, Wenting Pang, and Yu Wang. "Research on FPGA Accelerator Optimization Based on Graph Neural Network". In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 536–42. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-20738-9_61.
Jin, Shaopeng, Shuo Qi, Yilin Dai, and Yihu Hu. "Design of Convolutional Neural Network Accelerator Based on RISC-V". In Lecture Notes on Data Engineering and Communications Technologies, 446–54. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29097-8_53.
Conference papers on the topic "Neural network accelerator"
Shiflett, Kyle, Dylan Wright, Avinash Karanth, and Ahmed Louri. "PIXEL: Photonic Neural Network Accelerator". In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020. http://dx.doi.org/10.1109/hpca47549.2020.00046.
Xu, David, A. Barış Özgüler, Giuseppe Di Guglielmo, Nhan Tran, Gabriel Perdue, Luca Carloni, and Farah Fahim. "Neural network accelerator for quantum control". In Neural network accelerator for quantum control. US DOE, 2023. http://dx.doi.org/10.2172/1959815.
Yang, Zunming, Zhanzhuang He, Jing Yang, and Zhong Ma. "An LSTM Acceleration Method Based on Embedded Neural Network Accelerator". In ACAI'21: 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3508546.3508649.
Yi, Qian. "FPGA Implementation of Neural Network Accelerator". In 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). IEEE, 2018. http://dx.doi.org/10.1109/imcec.2018.8469659.
Vogt, Michael C. "Neural network-based sensor signal accelerator". In Intelligent Systems and Smart Manufacturing, edited by Peter E. Orban and George K. Knopf. SPIE, 2001. http://dx.doi.org/10.1117/12.417242.
Wang, Hong, Xiao Zhang, Dehui Kong, Guoning Lu, Degen Zhen, Fang Zhu, and Ke Xu. "Convolutional Neural Network Accelerator on FPGA". In 2019 IEEE International Conference on Integrated Circuits, Technologies and Applications (ICTA). IEEE, 2019. http://dx.doi.org/10.1109/icta48799.2019.9012821.
Xu, David, A. Barış Özgüler, Giuseppe Di Guglielmo, Nhan Tran, Gabriel N. Perdue, Luca Carloni, and Farah Fahim. "Neural network accelerator for quantum control". In 2022 IEEE/ACM Third International Workshop on Quantum Computing Software (QCS). IEEE, 2022. http://dx.doi.org/10.1109/qcs56647.2022.00010.
Miscuglio, Mario, Zibo Hu, Shurui Li, Puneet Gupta, Hamed Dalir, and Volker J. Sorger. "Fourier Optical Convolutional Neural Network Accelerator". In Signal Processing in Photonic Communications. Washington, D.C.: OSA, 2021. http://dx.doi.org/10.1364/sppcom.2021.spm5c.2.
Lai, Yeong-Kang, and Zheng-Xun Yeh. "An Efficient Convolutional Neural Network Accelerator". In 2023 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan). IEEE, 2023. http://dx.doi.org/10.1109/icce-taiwan58799.2023.10226679.
Mody, Mihir, Prithvi Shankar, Veeramanikandan Raju, and Sriramakrishnan Govindarajan. "Fail-Safe Neural Network Inference Accelerator". In 2021 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). IEEE, 2021. http://dx.doi.org/10.1109/conecct52877.2021.9622537.
Organizational reports on the topic "Neural network accelerator"
Aimone, James, Christopher Bennett, Suma Cardwell, Ryan Dellana, and Tianyao Xiao. Mosaic The Best of Both Worlds: Analog devices with Digital Spiking Communication to build a Hybrid Neural Network Accelerator. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1673175.
Morgan, Nelson, Jerome Feldman, and John Wawrzynek. Accelerator Systems for Neural Networks, Speech, and Related Applications. Fort Belvoir, VA: Defense Technical Information Center, April 1995. http://dx.doi.org/10.21236/ada298954.
Garg, Raveesh, Eric Qin, Francisco Martinez, Robert Guirado, Akshay Jain, Sergi Abadal, Jose Abellan, et al. Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1821960.
Wideman, Jr., Robert F., Nicholas B. Anthony, Avigdor Cahaner, Alan Shlosberg, Michel Bellaiche, and William B. Roush. Integrated Approach to Evaluating Inherited Predictors of Resistance to Pulmonary Hypertension Syndrome (Ascites) in Fast Growing Broiler Chickens. United States Department of Agriculture, December 2000. http://dx.doi.org/10.32747/2000.7575287.bard.
DEEP LEARNING DAMAGE IDENTIFICATION METHOD FOR STEEL-FRAME BRACING STRUCTURES USING TIME–FREQUENCY ANALYSIS AND CONVOLUTIONAL NEURAL NETWORKS. The Hong Kong Institute of Steel Construction, December 2023. http://dx.doi.org/10.18057/ijasc.2023.19.4.8.