Academic literature on the topic "Neural network accelerator"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference proceedings, and other scholarly sources on the topic "Neural network accelerator".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Neural network accelerator"
Eliahu, Adi, Ronny Ronen, Pierre-Emmanuel Gaillardon, and Shahar Kvatinsky. "multiPULPly". ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (April 2021): 1–27. http://dx.doi.org/10.1145/3432815.
Cho, Jaechan, Yongchul Jung, Seongjoo Lee, and Yunho Jung. "Reconfigurable Binary Neural Network Accelerator with Adaptive Parallelism Scheme". Electronics 10, no. 3 (January 20, 2021): 230. http://dx.doi.org/10.3390/electronics10030230.
Hong, JiUn, Saad Arslan, TaeGeon Lee, and HyungWon Kim. "Design of Power-Efficient Training Accelerator for Convolution Neural Networks". Electronics 10, no. 7 (March 26, 2021): 787. http://dx.doi.org/10.3390/electronics10070787.
Noskova, E. S., I. E. Zakharov, Y. N. Shkandybin, and S. G. Rykovanov. "Towards energy-efficient neural network calculations". Computer Optics 46, no. 1 (February 2022): 160–66. http://dx.doi.org/10.18287/2412-6179-co-914.
Ferianc, Martin, Hongxiang Fan, Divyansh Manocha, Hongyu Zhou, Shuanglong Liu, Xinyu Niu, and Wayne Luk. "Improving Performance Estimation for Design Space Exploration for Convolutional Neural Network Accelerators". Electronics 10, no. 4 (February 23, 2021): 520. http://dx.doi.org/10.3390/electronics10040520.
Sunny, Febin P., Asif Mirza, Mahdi Nikdast, and Sudeep Pasricha. "ROBIN: A Robust Optical Binary Neural Network Accelerator". ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–24. http://dx.doi.org/10.1145/3476988.
Anmin, Kong, and Zhao Bin. "A Parallel Loading Based Accelerator for Convolution Neural Network". International Journal of Machine Learning and Computing 10, no. 5 (October 5, 2020): 669–74. http://dx.doi.org/10.18178/ijmlc.2020.10.5.989.
Xia, Chengpeng, Yawen Chen, Haibo Zhang, Hao Zhang, Fei Dai, and Jigang Wu. "Efficient neural network accelerators with optical computing and communication". Computer Science and Information Systems, no. 00 (2022): 66. http://dx.doi.org/10.2298/csis220131066x.
Tang, Wenkai, and Peiyong Zhang. "GPGCN: A General-Purpose Graph Convolution Neural Network Accelerator Based on RISC-V ISA Extension". Electronics 11, no. 22 (November 21, 2022): 3833. http://dx.doi.org/10.3390/electronics11223833.
An, Fubang, Lingli Wang, and Xuegong Zhou. "A High Performance Reconfigurable Hardware Architecture for Lightweight Convolutional Neural Network". Electronics 12, no. 13 (June 27, 2023): 2847. http://dx.doi.org/10.3390/electronics12132847.
Theses on the topic "Neural network accelerator"
Tianxu, Yue. "Convolutional Neural Network FPGA-accelerator on Intel DE10-Standard FPGA". Thesis, Linköpings universitet, Elektroniska Kretsar och System, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178174.
Oudrhiri, Ali. "Performance of a Neural Network Accelerator Architecture and its Optimization Using a Pipeline-Based Approach". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS658.pdf.
In recent years, neural networks have gained widespread popularity for their versatility and effectiveness in solving a wide range of complex tasks. Their ability to learn and make predictions from large datasets has revolutionized many fields. However, as neural networks find applications in an ever-expanding array of domains, their significant computational requirements become a pressing challenge, particularly when deploying them on resource-constrained embedded devices for edge inference. Neural network accelerator chips have emerged as the preferred way to support neural networks at the edge: they combine compact size, low power consumption, and low latency, and their integration within a single chip environment also enhances security by minimizing external data communication. Edge computing brings diverse requirements that impose trade-offs among performance aspects, which has led to highly configurable accelerator architectures able to adapt to distinct performance demands. In this context, the focus lies on Gemini, a configurable inference neural network accelerator designed under an imposed architecture and implemented using High-Level Synthesis techniques; its design and implementation were driven by the need for configurable parallelism and performance optimization. Once the accelerator was designed, demonstrating the power of its configurability became essential so that users can select the most suitable architecture for their neural networks. To this end, this thesis contributes a performance prediction strategy that operates at a high level of abstraction and takes into account the chosen architecture and the neural network configuration, helping clients decide on the appropriate architecture for their specific neural network applications. During the research, we observed that a single accelerator has several limits and that increasing parallelism yields diminishing returns in performance. We therefore adopted a new strategy for optimizing neural network acceleration that takes a high-level approach and does not require fine-grained accelerator optimizations: multiple Gemini instances are organized into a pipeline and layers are allocated to the different accelerators to maximize performance. We propose solutions for two scenarios. In a user scenario, the pipeline structure is predefined, with a fixed number of accelerators, accelerator configurations, and RAM sizes, and we map the layers onto the different accelerators to optimize execution performance. In a designer scenario, the pipeline structure is not fixed, and the number and configuration of the accelerators can also be chosen to optimize both execution and hardware performance. This pipeline strategy has proven effective for the Gemini accelerator. Although this thesis originated from a specific industrial need, several of the solutions developed during the research can be applied or adapted to other neural network accelerators; notably, the performance prediction strategy and the high-level optimization of NN processing through pipelining multiple instances offer valuable insights for broader application.
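The pipeline strategy summarized above is, at its core, a layer-to-stage mapping problem. As a rough illustration only, and not the algorithm used in the thesis, the Python sketch below assumes per-layer latency estimates (for example, from a high-level performance predictor) and packs contiguous runs of layers onto a fixed number of pipelined accelerators so that the slowest stage, which bounds pipeline throughput, is as short as possible.

def bottleneck_latency(layer_latency, num_accelerators):
    # Binary-search the smallest per-stage latency budget for which the layers
    # can be packed, in order, into at most num_accelerators pipeline stages.
    def stages_needed(budget):
        stages, current = 1, 0.0
        for t in layer_latency:
            if t > budget:
                return float("inf")          # a single layer exceeds the budget
            if current + t > budget:
                stages, current = stages + 1, t
            else:
                current += t
        return stages

    lo, hi = max(layer_latency), sum(layer_latency)
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if stages_needed(mid) <= num_accelerators:
            hi = mid                          # feasible: try a tighter budget
        else:
            lo = mid
    return hi

# Hypothetical per-layer latencies (ms) mapped onto 3 accelerator instances.
print(bottleneck_latency([4.0, 2.0, 7.0, 3.0, 5.0, 1.0], 3))   # ~9.0

A designer-scenario variant would additionally search over the number and configuration of the accelerator instances rather than taking them as fixed.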
Maltoni, Pietro. "Progetto di un acceleratore hardware per layer di convoluzioni depthwise in applicazioni di Deep Neural Network". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24205/.
Xu, Hongjie. "Energy-Efficient On-Chip Cache Architectures and Deep Neural Network Accelerators Considering the Cost of Data Movement". Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263786.
Riera Villanueva, Marc. "Low-power accelerators for cognitive computing". Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669828.
Deep neural networks (DNNs) have achieved tremendous success in cognitive applications, and are especially effective in classification and decision-making problems such as speech recognition or machine translation. Mobile devices rely more and more on DNNs to understand the world. Smartphones, smartwatches, and even cars perform discriminative tasks such as face or object recognition on a daily basis. Despite the growing popularity of DNNs, running them on mobile systems poses several challenges: delivering high accuracy and performance within a small memory and energy budget. Modern DNNs consist of millions of parameters that require enormous computational and memory resources and therefore cannot be used directly on low-power, resource-constrained systems. The goal of this thesis is to address these problems and propose new solutions for designing efficient accelerators for DNN-based cognitive computing systems. First, we focus on optimizing DNN inference for sequence-processing applications. We analyze the similarity of the inputs across consecutive DNN executions. We then propose DISC, an accelerator that implements a differential computation technique, based on the high degree of similarity of the inputs, to reuse computations from the previous execution instead of computing the entire network. We observe that, on average, more than 60% of the inputs of any layer of the DNNs evaluated exhibit minor changes with respect to the previous execution. Avoiding the memory accesses and computations for these inputs yields energy savings of 63% on average. Second, we propose optimizing the inference of DNNs based on FC (fully connected) layers. We first analyze the number of unique weights per input neuron in several networks. Exploiting common optimizations such as linear quantization, we observe a very small number of unique weights per input in several FC layers of modern DNNs. Then, to improve the energy efficiency of FC layer computation, we present CREW, an accelerator that implements an efficient computation reuse and weight storage mechanism. CREW reduces the number of multiplications and provides significant savings in memory usage. We evaluate CREW on a diverse set of modern DNNs, where it provides, on average, a 2.61x performance improvement and 2.42x energy savings. Third, we propose a mechanism to optimize RNN inference. Recurrent cells perform element-wise multiplications of the activations of different gates, with sigmoid and tanh being the usual activation functions. We analyze the activation function values and show that a significant fraction is saturated towards zero or one in a set of popular RNNs. We then propose CGPA to dynamically prune RNN activations at a coarse granularity. CGPA avoids evaluating entire neurons whenever the outputs of paired neurons are saturated. CGPA significantly reduces the amount of computation and memory accesses, achieving on average a 12% improvement in performance and energy savings. Finally, in the last contribution of this thesis we focus on static DNN pruning methodologies.
Pruning reduces the memory footprint and the computational work by removing redundant connections or neurons. However, we show that previous pruning schemes rely on a very long iterative process that requires training the DNN many times to tune the pruning parameters. We then propose a pruning scheme based on principal component analysis and the relative importance of each neuron's connections that automatically prunes the DNN in a single shot without the need to manually tune multiple parameters.
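For intuition only, the input-similarity idea behind DISC can be modeled in a few lines of Python; this is an illustrative software sketch under assumed data, not the hardware design described in the thesis. A fully connected layer keeps its previous output and updates it using only the inputs that changed since the last execution, so the multiplications and weight fetches associated with unchanged inputs are skipped.

import numpy as np

def differential_fc(weights, x_prev, y_prev, x_new, threshold=0.0):
    # weights: (inputs, outputs); y_prev was computed as x_prev @ weights.
    delta = x_new - x_prev
    changed = np.abs(delta) > threshold        # inputs that actually moved
    # Update the previous output using only the changed inputs.
    y_new = y_prev + delta[changed] @ weights[changed, :]
    return y_new, int(changed.sum())

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 64))
x0 = rng.standard_normal(256)
y0 = x0 @ W
x1 = x0.copy()
x1[:32] += 0.1                                 # only 32 of 256 inputs change
y1, n_changed = differential_fc(W, x0, y0, x1)
print(n_changed, np.allclose(y1, x1 @ W))      # 32 True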
Khan, Muhammad Jazib. "Programmable Address Generation Unit for Deep Neural Network Accelerators". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-271884.
Convolutional neural networks are becoming more and more popular because of their applications in revolutionary technologies such as autonomous driving, biomedical imaging, and natural language processing. With this increase in adoption, the complexity of the underlying algorithms also grows. This has implications for the computing platforms, whether GPU-, FPGA-, or ASIC-based accelerators, and in particular for the Address Generation Unit (AGU), which is responsible for memory accesses. Existing accelerators typically have parametrizable-datapath AGUs with very limited adaptability to developments in the algorithms. New hardware is therefore required for new algorithms, which is a very inefficient approach in terms of time, resources, and reusability. In this work, six algorithms with different implications for address-generation hardware are evaluated, and a fully Programmable AGU (PAGU) that can adapt to these algorithms is presented. The algorithms are standard, strided, dilated, upsampled, and padded convolution, as well as max pooling. The proposed AGU architecture is a Very Long Instruction Word-based application-specific instruction-set processor with specialized components such as hardware counters and zero-overhead loops, and a powerful Instruction Set Architecture (ISA) that can model static and dynamic constraints and affine and non-affine address equations. The goal has been to minimize the trade-off between flexibility and area, power, and performance. For a working test network for semantic segmentation, the results show that PAGU achieves close to ideal performance, 1 cycle per address, for all algorithms considered except upsampled convolution, for which it takes 1.7 cycles per address. The area of PAGU is about 4.6 times larger than that of the parametrizable-datapath approach, which is still reasonable given the large gains in flexibility. The potential of PAGU is not limited to neural network applications; more general digital signal processing domains can also be explored in the future.
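As a toy illustration of what such an address generation unit has to compute (a software sketch, not the PAGU ISA or its hardware), the input addresses touched by a strided and dilated convolution are affine functions of the loop counters, which is the kind of equation a programmable AGU evaluates with hardware counters and zero-overhead loops; the upsampled and padded cases add the dynamic constraints mentioned above.

def input_addresses(height, width, k, stride=1, dilation=1, base=0):
    # Yield the flat input address read for every (output pixel, kernel tap).
    out_h = (height - dilation * (k - 1) - 1) // stride + 1
    out_w = (width - dilation * (k - 1) - 1) // stride + 1
    for oy in range(out_h):
        for ox in range(out_w):
            for ky in range(k):
                for kx in range(k):
                    row = oy * stride + ky * dilation
                    col = ox * stride + kx * dilation
                    yield base + row * width + col   # affine in (oy, ox, ky, kx)

# Example: 8x8 input, 3x3 kernel, stride 2, no dilation.
addresses = list(input_addresses(8, 8, 3, stride=2))
print(len(addresses), addresses[:9])   # 81 [0, 1, 2, 8, 9, 10, 16, 17, 18]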
Jalasutram, Rommel. "Acceleration of spiking neural networks on multicore architectures". Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1252424720/.
Han, Bing. "ACCELERATION OF SPIKING NEURAL NETWORK ON GENERAL PURPOSE GRAPHICS PROCESSORS". University of Dayton / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1271368713.
Chen, Yu-Hsin Ph D. Massachusetts Institute of Technology. "Architecture design for highly flexible and energy-efficient deep neural network accelerators". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117838.
Deep neural networks (DNNs) are the backbone of modern artificial intelligence (AI). However, due to their high computational complexity and diverse shapes and sizes, dedicated accelerators that can achieve high performance and energy efficiency across a wide range of DNNs are critical for enabling AI in real-world applications. To address this, we present Eyeriss, a co-design of software and hardware architecture for DNN processing that is optimized for performance, energy efficiency and flexibility. Eyeriss features a novel Row-Stationary (RS) dataflow to minimize data movement when processing a DNN, which is the bottleneck of both performance and energy efficiency. The RS dataflow supports highly-parallel processing while fully exploiting data reuse in a multi-level memory hierarchy to optimize for the overall system energy efficiency given any DNN shape and size. It achieves 1.4x to 2.5x higher energy efficiency than other existing dataflows. To support the RS dataflow, we present two versions of the Eyeriss architecture. Eyeriss v1 targets large DNNs that have plenty of data reuse. It features a flexible mapping strategy for high performance and a multicast on-chip network (NoC) for high data reuse, and further exploits data sparsity to reduce processing element (PE) power by 45% and off-chip bandwidth by up to 1.9x. Fabricated in a 65nm CMOS, Eyeriss v1 consumes 278 mW at 34.7 fps for the CONV layers of AlexNet, which is 10x more efficient than a mobile GPU. Eyeriss v2 addresses support for the emerging compact DNNs that introduce higher variation in data reuse. It features a RS+ dataflow that improves PE utilization, and a flexible and scalable NoC that adapts to the bandwidth requirement while also exploiting available data reuse. Together, they provide over 10x higher throughput than Eyeriss v1 at 256 PEs. Eyeriss v2 also exploits sparsity and SIMD for an additional 6x increase in throughput.
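As a back-of-the-envelope model of the row-stationary idea (a toy software sketch, not the Eyeriss implementation), a 2D convolution can be decomposed into 1D row convolutions: each processing element keeps one filter row stationary and slides one input row across it, and the resulting partial-sum rows are accumulated vertically, so filter rows and input rows are both reused instead of being re-fetched for every multiply.

import numpy as np

def pe_row_conv(input_row, filter_row):
    # One PE: 1D convolution of an input row with a stationary filter row.
    w, k = len(input_row), len(filter_row)
    return np.array([input_row[x:x + k] @ filter_row for x in range(w - k + 1)])

def row_stationary_conv2d(image, kernel):
    h, w = image.shape
    k = kernel.shape[0]
    out = np.zeros((h - k + 1, w - k + 1))
    for oy in range(h - k + 1):          # each output row
        for ky in range(k):              # k PEs accumulate partial-sum rows
            out[oy] += pe_row_conv(image[oy + ky], kernel[ky])
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
ker = np.ones((3, 3))
ref = np.array([[img[i:i + 3, j:j + 3].sum() for j in range(4)] for i in range(4)])
print(np.allclose(row_stationary_conv2d(img, ker), ref))   # True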
Gaura, Elena Ioana. "Neural network techniques for the control and identification of acceleration sensors". Thesis, Coventry University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313132.
Books on the topic "Neural network accelerator"
Whitehead, P. A. Design considerations for a hardware accelerator for Kohonen unsupervised learning in artificial neural networks. Manchester: UMIST, 1997.
Jones, Steven P. Neural network models of simple mechanical systems illustrating the feasibility of accelerated life testing. [Washington, DC]: National Aeronautics and Space Administration, 1996.
Daglis, I. A., ed. Effects of space weather on technology infrastructure. Dordrecht: Kluwer Academic Publishers, 2004.
Kong, Joonho, and Mahmood Azhar Qureshi. Accelerators for Convolutional Neural Networks. John Wiley & Sons, Incorporated, 2023.
Munir. Accelerators for Convolutional Neural Networks. John Wiley & Sons, Limited, 2023.
Accelerated training for large feedforward neural networks. Moffett Field, Calif.: National Aeronautics and Space Administration, Ames Research Center, 1998.
Raff, Lionel, Ranga Komanduri, Martin Hagan, and Satish Bukkapatnam. Neural Networks in Chemical Reaction Dynamics. Oxford University Press, 2012. http://dx.doi.org/10.1093/oso/9780199765652.001.0001.
AI Ladder: Accelerate Your Journey to AI. O'Reilly Media, Incorporated, 2020.
Book chapters on the topic "Neural network accelerator"
Huang, Hantao, and Hao Yu. "Distributed-Solver for Networked Neural Network". In Compact and Fast Machine Learning Accelerator for IoT Devices, 107–43. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_5.
Nakajima, Toshiya. "Architecture of the Neural Network Simulation Accelerator NEUROSIM/L". In International Neural Network Conference, 722–25. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_61.
Reagen, Brandon, Robert Adolf, Paul Whatmough, Gu-Yeon Wei, and David Brooks. "Neural Network Accelerator Optimization: A Case Study". In Deep Learning for Computer Architects, 43–61. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-01756-8_4.
Huang, Hantao, and Hao Yu. "Tensor-Solver for Deep Neural Network". In Compact and Fast Machine Learning Accelerator for IoT Devices, 63–105. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_4.
Ae, Tadashi, and Reiji Aibara. "A Neural Network for 3-D VLSI Accelerator". In The Kluwer International Series in Engineering and Computer Science, 179–88. Boston, MA: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-1619-0_16.
Huang, Hantao, and Hao Yu. "Least-Squares-Solver for Shallow Neural Network". In Compact and Fast Machine Learning Accelerator for IoT Devices, 29–62. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_3.
Hu, Lili. "Frameworks for Efficient Convolutional Neural Network Accelerator on FPGA". In Advances in Intelligent Systems and Computing, 651–57. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8944-2_75.
Cheung, Kit, Simon R. Schultz, and Wayne Luk. "A Large-Scale Spiking Neural Network Accelerator for FPGA Systems". In Artificial Neural Networks and Machine Learning – ICANN 2012, 113–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_15.
Wu, Jin, Xiangyang Shi, Wenting Pang, and Yu Wang. "Research on FPGA Accelerator Optimization Based on Graph Neural Network". In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 536–42. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-20738-9_61.
Jin, Shaopeng, Shuo Qi, Yilin Dai, and Yihu Hu. "Design of Convolutional Neural Network Accelerator Based on RISC-V". In Lecture Notes on Data Engineering and Communications Technologies, 446–54. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29097-8_53.
Conference papers on the topic "Neural network accelerator"
Shiflett, Kyle, Dylan Wright, Avinash Karanth, and Ahmed Louri. "PIXEL: Photonic Neural Network Accelerator". In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020. http://dx.doi.org/10.1109/hpca47549.2020.00046.
Xu, David, A. Barış Özgüler, Giuseppe Di Guglielmo, Nhan Tran, Gabriel Perdue, Luca Carloni, and Farah Fahim. "Neural network accelerator for quantum control". In Neural network accelerator for quantum control. US DOE, 2023. http://dx.doi.org/10.2172/1959815.
Yang, Zunming, Zhanzhuang He, Jing Yang, and Zhong Ma. "An LSTM Acceleration Method Based on Embedded Neural Network Accelerator". In ACAI'21: 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3508546.3508649.
Yi, Qian. "FPGA Implementation of Neural Network Accelerator". In 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). IEEE, 2018. http://dx.doi.org/10.1109/imcec.2018.8469659.
Vogt, Michael C. "Neural network-based sensor signal accelerator". In Intelligent Systems and Smart Manufacturing, edited by Peter E. Orban and George K. Knopf. SPIE, 2001. http://dx.doi.org/10.1117/12.417242.
Wang, Hong, Xiao Zhang, Dehui Kong, Guoning Lu, Degen Zhen, Fang Zhu, and Ke Xu. "Convolutional Neural Network Accelerator on FPGA". In 2019 IEEE International Conference on Integrated Circuits, Technologies and Applications (ICTA). IEEE, 2019. http://dx.doi.org/10.1109/icta48799.2019.9012821.
Xu, David, A. Barış Özgüler, Giuseppe Di Guglielmo, Nhan Tran, Gabriel N. Perdue, Luca Carloni, and Farah Fahim. "Neural network accelerator for quantum control". In 2022 IEEE/ACM Third International Workshop on Quantum Computing Software (QCS). IEEE, 2022. http://dx.doi.org/10.1109/qcs56647.2022.00010.
Miscuglio, Mario, Zibo Hu, Shurui Li, Puneet Gupta, Hamed Dalir, and Volker J. Sorger. "Fourier Optical Convolutional Neural Network Accelerator". In Signal Processing in Photonic Communications. Washington, D.C.: OSA, 2021. http://dx.doi.org/10.1364/sppcom.2021.spm5c.2.
Lai, Yeong-Kang, and Zheng-Xun Yeh. "An Efficient Convolutional Neural Network Accelerator". In 2023 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan). IEEE, 2023. http://dx.doi.org/10.1109/icce-taiwan58799.2023.10226679.
Mody, Mihir, Prithvi Shankar, Veeramanikandan Raju, and Sriramakrishnan Govindarajan. "Fail-Safe Neural Network Inference Accelerator". In 2021 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). IEEE, 2021. http://dx.doi.org/10.1109/conecct52877.2021.9622537.
Reports on the topic "Neural network accelerator"
Aimone, James, Christopher Bennett, Suma Cardwell, Ryan Dellana, and Tianyao Xiao. Mosaic: The Best of Both Worlds: Analog devices with Digital Spiking Communication to build a Hybrid Neural Network Accelerator. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1673175.
Morgan, Nelson, Jerome Feldman, and John Wawrzynek. Accelerator Systems for Neural Networks, Speech, and Related Applications. Fort Belvoir, VA: Defense Technical Information Center, April 1995. http://dx.doi.org/10.21236/ada298954.
Garg, Raveesh, Eric Qin, Francisco Martinez, Robert Guirado, Akshay Jain, Sergi Abadal, Jose Abellan et al. Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1821960.
Wideman, Jr., Robert F., Nicholas B. Anthony, Avigdor Cahaner, Alan Shlosberg, Michel Bellaiche, and William B. Roush. Integrated Approach to Evaluating Inherited Predictors of Resistance to Pulmonary Hypertension Syndrome (Ascites) in Fast Growing Broiler Chickens. United States Department of Agriculture, December 2000. http://dx.doi.org/10.32747/2000.7575287.bard.
DEEP LEARNING DAMAGE IDENTIFICATION METHOD FOR STEEL-FRAME BRACING STRUCTURES USING TIME–FREQUENCY ANALYSIS AND CONVOLUTIONAL NEURAL NETWORKS. The Hong Kong Institute of Steel Construction, December 2023. http://dx.doi.org/10.18057/ijasc.2023.19.4.8.