A selection of scholarly literature on the topic "Efficient Neural Networks"
Cite sources in APA, MLA, Chicago, Harvard, and other styles
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Efficient Neural Networks".
Next to each source in the list of references, there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these details are available in the metadata.
Journal articles on the topic "Efficient Neural Networks"
Guo, Qingbei, Xiao-Jun Wu, Josef Kittler, and Zhiquan Feng. "Differentiable neural architecture learning for efficient neural networks." Pattern Recognition 126 (June 2022): 108448. http://dx.doi.org/10.1016/j.patcog.2021.108448.
Liu, Shiya, Dong Sam Ha, Fangyang Shen, and Yang Yi. "Efficient neural networks for edge devices." Computers & Electrical Engineering 92 (June 2021): 107121. http://dx.doi.org/10.1016/j.compeleceng.2021.107121.
Li, Guoqing, Meng Zhang, Jiaojie Li, Feng Lv, and Guodong Tong. "Efficient densely connected convolutional neural networks." Pattern Recognition 109 (January 2021): 107610. http://dx.doi.org/10.1016/j.patcog.2020.107610.
Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. "Efficient Processing of Deep Neural Networks." Synthesis Lectures on Computer Architecture 15, no. 2 (June 16, 2020): 1–341. http://dx.doi.org/10.2200/s01004ed1v01y202004cac050.
Zelenina, Larisa I., D. S. Khripunov, Liudmila E. Khaimina, Evgenii S. Khaimin, and Inga M. Zashikhina. "The Problem of Images’ Classification: Neural Networks." Mathematics and Informatics LXIV, no. 3 (June 30, 2021): 289–300. http://dx.doi.org/10.53656/math2021-3-4-the.
Fahim, Houda, Olivier Sawadogo, Nour Alaa, and Mohammed Guedda. "AN EFFICIENT IDENTIFICATION OF RED BLOOD CELL EQUILIBRIUM SHAPE USING NEURAL NETWORKS." Eurasian Journal of Mathematical and Computer Applications 9, no. 2 (June 2021): 39–56. http://dx.doi.org/10.32523/2306-6172-2021-9-2-39-56.
Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network." ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3491255.
Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.
Zhang, Huan, Pengchuan Zhang, and Cho-Jui Hsieh. "RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5757–64. http://dx.doi.org/10.1609/aaai.v33i01.33015757.
Zhang, Wenqiang, Jiemin Fang, Xinggang Wang, and Wenyu Liu. "EfficientPose: Efficient human pose estimation with neural architecture search." Computational Visual Media 7, no. 3 (April 7, 2021): 335–47. http://dx.doi.org/10.1007/s41095-021-0214-z.
Повний текст джерелаДисертації з теми "Efficient Neural Networks"
Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.
Deep learning algorithms have achieved remarkable success in applications such as automatic speech recognition and machine translation. As a result, these applications are ubiquitous in our lives and run on a large number of devices. The algorithms are built from Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have very large numbers of parameters and computations. Deploying DNNs on mobile devices and servers is therefore challenging due to their memory and energy requirements. RNNs are used to solve sequence-to-sequence problems such as machine translation. They contain data dependencies between the executions of consecutive time-steps, so the available parallelism is limited, which makes energy-efficient RNN evaluation a challenge. This thesis studies RNNs in order to improve their energy efficiency on specialized architectures, proposing energy-saving techniques and highly efficient architectures tailored to RNN evaluation. First, we characterize a set of RNNs running on a SoC and identify that memory accesses for reading the weights are the main source of energy consumption, accounting for up to 80%. We therefore build E-PUR, a processing unit for RNNs. E-PUR achieves a 6.8x speedup and improves energy consumption by 88x compared with the SoC, thanks to maximizing the temporal locality of the weights. Within E-PUR, reading the weights remains the largest energy cost, so we focus on reducing memory accesses and devise a scheme that reuses previously computed results.
The observation is that, when evaluating the input sequences of an RNN, the output of a given neuron tends to change only slightly between consecutive evaluations, so we devise a scheme that caches the neurons' outputs and reuses them whenever it detects a small change between the current output value and the previous one, which avoids reading the weights. To decide when to reuse a previous computation, we use a Binary Neural Network (BNN) as a reuse predictor, since its output is highly correlated with the output of the RNN. This proposal avoids more than 24.2% of the computations and reduces average energy consumption by 18.5%. The memory footprint of RNN models is usually reduced by using low precision for evaluating and storing the weights. In that case, the minimum precision is identified statically and is set so that the RNN preserves its accuracy; such methods normally use the same precision for all computations. However, we observe that some computations can be evaluated at a lower precision without affecting accuracy. We therefore devise a technique that dynamically selects the precision used to compute each time-step. One challenge of this proposal is how to choose a lower precision. We address it by recognizing that the result of a previous evaluation can be used to determine the precision required at the current time-step. Our scheme evaluates 57% of the computations at a precision lower than the fixed precision employed by static methods. Finally, evaluation on E-PUR shows a 1.46x speedup with an average energy saving of 19.2%.
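The output-caching idea summarized in this abstract can be sketched in a few lines. This is only an illustrative sketch, not the thesis implementation: a simple threshold test on consecutive input vectors stands in for the Binary Neural Network reuse predictor, and the function name `rnn_layer_with_reuse`, the weight matrix `W`, and the `threshold` parameter are hypothetical.

```python
import numpy as np

def rnn_layer_with_reuse(inputs, W, threshold=0.05):
    """Sketch of computation reuse for one recurrent layer.

    Caches each time-step's neuron outputs and reuses them when the
    next input vector is nearly unchanged, skipping the weight reads
    (the dominant energy cost identified in the abstract above).
    """
    n_neurons = W.shape[0]
    cached_out = np.zeros(n_neurons)
    prev_x = None
    reused = 0          # how many time-steps skipped the full evaluation
    outputs = []
    for x in inputs:    # one input vector per time-step
        if prev_x is not None and np.max(np.abs(x - prev_x)) < threshold:
            out = cached_out        # reuse: no weight accesses needed
            reused += 1
        else:
            out = np.tanh(W @ x)    # full evaluation, refresh the cache
            cached_out = out
        prev_x = x
        outputs.append(out)
    return np.array(outputs), reused
```

In the thesis itself the reuse decision is made by a small binary network whose output correlates with the RNN's, rather than by thresholding raw inputs; the sketch only shows where such a predictor would slot in.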
Golea, Mostefa. "On efficient learning algorithms for neural networks." Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6508.
Islam, Taj-ul. "Channel routing : efficient solutions using neural networks /." Online version of thesis, 1993. http://hdl.handle.net/1850/11154.
Zhao, Wei. "Efficient neural networks for prediction of turbulent flow." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/16939.
Billings, Rachel Mae. "On Efficient Computer Vision Applications for Neural Networks." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102957.
Master of Science
The subject of machine learning and its associated jargon have become ubiquitous in the past decade as industries seek to develop automated tools and applications and researchers continue to develop new methods for artificial intelligence and improve upon existing ones. Neural networks are a type of machine learning algorithm that can make predictions in complex situations based on input data with human-like (or better) accuracy. Real-time, low-power, and low-cost systems using these algorithms are increasingly used in consumer and industry applications, often improving the efficiency of completing mundane and hazardous tasks traditionally performed by humans. The focus of this work is (1) to explore when and why neural networks may make incorrect decisions in the domain of image-based prediction tasks, (2) the demonstration of a low-power, low-cost machine learning use case using a mask recognition system intended to be suitable for deployment in support of COVID-19-related mask regulations, and (3) the investigation of how neural networks may be implemented on resource-limited technology in an efficient manner using an emerging form of computing.
Bozorgmehr, Pouya. "An efficient online feature extraction algorithm for neural networks." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p1470604.
Title from first page of PDF file (viewed January 13, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 61-63).
Al-Hindi, Khalid A. "Flexible basis function neural networks for efficient analog implementations /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3074367.
Ekman, Carl. "Traffic Sign Classification Using Computationally Efficient Convolutional Neural Networks." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157453.
Adamu, Abdullahi S. "An empirical study towards efficient learning in artificial neural networks by neuronal diversity." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33799/.
Etchells, Terence Anthony. "Rule extraction from neural networks : a practical and efficient approach." Thesis, Liverpool John Moores University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402847.
Повний текст джерелаКниги з теми "Efficient Neural Networks"
Approximation methods for efficient learning of Bayesian networks. Amsterdam: IOS Press, 2008.
Omohundro, Stephen M. Efficient algorithms with neural network behavior. Urbana, Il (1304 W. Springfield Ave., Urbana 61801): Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1987.
Costa, Álvaro. Evaluating public transport efficiency with neural network models. Loughborough: Loughborough University, Department of Economics, 1996.
Markellos, Raphael N. Robust estimation of nonlinear production frontiers and efficiency: A neural network approach. Loughborough: Loughborough University, Department of Economics, 1997.
Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Morgan & Claypool Publishers, 2020.
Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Springer International Publishing AG, 2020.
Ghotra, Manpreet Singh, and Rajdeep Dua. Neural Network Programming with TensorFlow: Unleash the power of TensorFlow to train efficient neural networks. Packt Publishing, 2017.
Takikawa, Masami. Representations and algorithms for efficient inference in Bayesian networks. 1998.
Знайти повний текст джерелаЧастини книг з теми "Efficient Neural Networks"
Awad, Mariette, and Rahul Khanna. "Deep Neural Networks." In Efficient Learning Machines, 127–47. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-5990-9_7.
Karayiannis, N. B., and A. N. Venetsanopoulos. "ELEANNE: Efficient LEarning Algorithms for Neural NEtworks." In Artificial Neural Networks, 87–139. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4_3.
Ferrari, Enrico, and Marco Muselli. "Efficient Constructive Techniques for Training Switching Neural Networks." In Constructive Neural Networks, 25–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04512-7_2.
Bellot, Pau, and Patrick E. Meyer. "Efficient Combination of Pairwise Feature Networks." In Neural Connectomics Challenge, 85–93. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-53070-3_7.
Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. "Overview of Deep Neural Networks." In Efficient Processing of Deep Neural Networks, 17–39. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01766-7_2.
Lee, Hyunjung, Harksoo Kim, and Jungyun Seo. "Efficient Domain Action Classification Using Neural Networks." In Neural Information Processing, 150–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11893257_17.
Bahroun, Yanis, Eugénie Hunsicker, and Andrea Soltoggio. "Neural Networks for Efficient Nonlinear Online Clustering." In Neural Information Processing, 316–24. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70087-8_34.
Rajalakshmi, Ratnavel, Abhinav Basil Shinow, Aswin Murali, Kashinadh S. Nair, and J. Bhuvana. "An Efficient Convolutional Neural Network with Image Augmentation for Cassava Leaf Disease Detection." In Recurrent Neural Networks, 289–305. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-19.
Van Belle, Vanya, Kristiaan Pelckmans, Johan A. K. Suykens, and Sabine Van Huffel. "MINLIP: Efficient Learning of Transformation Models." In Artificial Neural Networks – ICANN 2009, 60–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_7.
Alamro, Hayam, Mai Alzamel, Costas S. Iliopoulos, Solon P. Pissis, Steven Watts, and Wing-Kin Sung. "Efficient Identification of k-Closed Strings." In Engineering Applications of Neural Networks, 583–95. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-65172-9_49.
Повний текст джерелаТези доповідей конференцій з теми "Efficient Neural Networks"
Laube, Kevin A., and Andreas Zell. "ShuffleNASNets: Efficient CNN models through modified Efficient Neural Architecture Search." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852294.
Manjunathswamy, B. E., J. Thriveni, K. R. Venugopal, and L. M. Patnaik. "Efficient iris retrieval using neural networks." In 2012 Nirma University International Conference on Engineering (NUiCONE). IEEE, 2012. http://dx.doi.org/10.1109/nuicone.2012.6493176.
Yan, Bencheng, Chaokun Wang, Gaoyang Guo, and Yunkai Lou. "TinyGNN: Learning Efficient Graph Neural Networks." In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394486.3403236.
Miltrup, Matthias, and Georg Schnitger. "Neural networks and efficient associative memory." In the eleventh annual conference. New York, New York, USA: ACM Press, 1998. http://dx.doi.org/10.1145/279943.279991.
Kaylani, A., M. Georgiopoulos, M. Mollaghasemi, and G. C. Anagnostopoulos. "Efficient evolution of ART neural networks." In 2008 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2008. http://dx.doi.org/10.1109/cec.2008.4631265.
Nagarajan, Amrit, Jacob R. Stevens, and Anand Raghunathan. "Efficient ensembles of graph neural networks." In DAC '22: 59th ACM/IEEE Design Automation Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3489517.3530416.
Koiran, Pascal. "Efficient learning of continuous neural networks." In the seventh annual conference. New York, New York, USA: ACM Press, 1994. http://dx.doi.org/10.1145/180139.181177.
Kopuklu, Okan, Neslihan Kose, Ahmet Gunduz, and Gerhard Rigoll. "Resource Efficient 3D Convolutional Neural Networks." In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00240.
Young, Steven R., Pravallika Devineni, Maryam Parsa, J. Travis Johnston, Bill Kay, Robert M. Patton, Catherine D. Schuman, Derek C. Rose, and Thomas E. Potok. "Evolving Energy Efficient Convolutional Neural Networks." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006239.
Kim, Minsik, Cheonjun Park, Sungjun Kim, Taeyoung Hong, and Won Woo Ro. "Efficient Dilated-Winograd Convolutional Neural Networks." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803277.
Повний текст джерелаЗвіти організацій з теми "Efficient Neural Networks"
Semerikov, Serhiy, Illia Teplytskyi, Yuliia Yechkalo, Oksana Markova, Vladimir Soloviev, and Arnold Kiv. Computer Simulation of Neural Networks Using Spreadsheets: Dr. Anderson, Welcome Back. [б. в.], June 2019. http://dx.doi.org/10.31812/123456789/3178.
Wang, Felix, Nick Alonso, and Corinne Teeter. Combining Spike Time Dependent Plasticity (STDP) and Backpropagation (BP) for Robust and Data Efficient Spiking Neural Networks (SNN). Office of Scientific and Technical Information (OSTI), December 2022. http://dx.doi.org/10.2172/1902866.
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.
Yaroshchuk, Svitlana O., Nonna N. Shapovalova, Andrii M. Striuk, Olena H. Rybalchenko, Iryna O. Dotsenko, and Svitlana V. Bilashenko. Credit scoring model for microfinance organizations. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3683.
Gage, Harmon J. Using Upper Layer Weights to Efficiently Construct and Train Feedforward Neural Networks Executing Backpropagation. Fort Belvoir, VA: Defense Technical Information Center, March 2011. http://dx.doi.org/10.21236/ada545618.
DeStefano, Zachary Louis. SNNzkSNARK An Efficient Design and Implementation of a Secure Neural Network Verification System Using zkSNARKs. Office of Scientific and Technical Information (OSTI), January 2020. http://dx.doi.org/10.2172/1583147.
Miles, Gaines E., Yael Edan, F. Tom Turpin, Avshalom Grinstein, Thomas N. Jordan, Amots Hetzroni, Stephen C. Weller, Marvin M. Schreiber, and Okan K. Ersoy. Expert Sensor for Site Specification Application of Agricultural Chemicals. United States Department of Agriculture, August 1995. http://dx.doi.org/10.32747/1995.7570567.bard.
Venedicto, Melissa, and Cheng-Yu Lai. Facilitated Release of Doxorubicin from Biodegradable Mesoporous Silica Nanoparticles. Florida International University, October 2021. http://dx.doi.org/10.25148/mmeurs.009774.
Downard, Alicia, Stephen Semmens, and Bryant Robbins. Automated characterization of ridge-swale patterns along the Mississippi River. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40439.