Selected scientific literature on the topic "Approximate identity neural networks"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Approximate identity neural networks".
Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract (summary) online, if one is included in the metadata.
Journal articles on the topic "Approximate identity neural networks"
Moon, Sunghwan. "ReLU Network with Bounded Width Is a Universal Approximator in View of an Approximate Identity". Applied Sciences 11, no. 1 (January 4, 2021): 427. http://dx.doi.org/10.3390/app11010427.
Funahashi, Ken-Ichi. "Approximate realization of identity mappings by three-layer neural networks". Electronics and Communications in Japan (Part III: Fundamental Electronic Science) 73, no. 11 (1990): 61–68. http://dx.doi.org/10.1002/ecjc.4430731107.
Zainuddin, Zarita, and Saeed Panahian Fard. "The Universal Approximation Capabilities of Cylindrical Approximate Identity Neural Networks". Arabian Journal for Science and Engineering 41, no. 8 (March 4, 2016): 3027–34. http://dx.doi.org/10.1007/s13369-016-2067-9.
Turchetti, C., M. Conti, P. Crippa, and S. Orcioni. "On the approximation of stochastic processes by approximate identity neural networks". IEEE Transactions on Neural Networks 9, no. 6 (1998): 1069–85. http://dx.doi.org/10.1109/72.728353.
Conti, M., and C. Turchetti. "Approximate identity neural networks for analog synthesis of nonlinear dynamical systems". IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 41, no. 12 (1994): 841–58. http://dx.doi.org/10.1109/81.340846.
Fard, Saeed Panahian, and Zarita Zainuddin. "Almost everywhere approximation capabilities of double Mellin approximate identity neural networks". Soft Computing 20, no. 11 (July 2, 2015): 4439–47. http://dx.doi.org/10.1007/s00500-015-1753-y.
Panahian Fard, Saeed, and Zarita Zainuddin. "The universal approximation capabilities of double 2π-periodic approximate identity neural networks". Soft Computing 19, no. 10 (September 6, 2014): 2883–90. http://dx.doi.org/10.1007/s00500-014-1449-8.
Panahian Fard, Saeed, and Zarita Zainuddin. "Analyses for Lp[a, b]-norm approximation capability of flexible approximate identity neural networks". Neural Computing and Applications 24, no. 1 (October 8, 2013): 45–50. http://dx.doi.org/10.1007/s00521-013-1493-9.
DiMattina, Christopher, and Kechen Zhang. "How to Modify a Neural Network Gradually Without Changing Its Input-Output Functionality". Neural Computation 22, no. 1 (January 2010): 1–47. http://dx.doi.org/10.1162/neco.2009.05-08-781.
Germani, S., G. Tosti, P. Lubrano, S. Cutini, I. Mereu, and A. Berretta. "Artificial Neural Network classification of 4FGL sources". Monthly Notices of the Royal Astronomical Society 505, no. 4 (June 24, 2021): 5853–61. http://dx.doi.org/10.1093/mnras/stab1748.
Theses on the topic "Approximate identity neural networks"
Ling, Hong. "Implementation of Stochastic Neural Networks for Approximating Random Processes". Master's thesis, Lincoln University. Environment, Society and Design Division, 2007. http://theses.lincoln.ac.nz/public/adt-NZLIU20080108.124352/.
Garces, Freddy. "Dynamic neural networks for approximate input-output linearisation-decoupling of dynamic systems". Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368662.
Testo completoLi, Yingzhen. "Approximate inference : new visions". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277549.
Liu, Leo (M. Eng., Massachusetts Institute of Technology). "Acoustic models for speech recognition using Deep Neural Networks based on approximate math". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100633.
Testo completoThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 81-83).
Deep Neural Networks (DNNs) are effective models for machine learning. Unfortunately, training a DNN is extremely time-consuming, even with the aid of a graphics processing unit (GPU). DNN training is especially slow for tasks with large datasets. Existing approaches for speeding up the process involve parallelizing the Stochastic Gradient Descent (SGD) algorithm used to train DNNs. Those approaches do not guarantee the same results as normal SGD since they introduce non-trivial changes into the algorithm. A new approach for faster training that avoids significant changes to SGD is to use low-precision hardware. The low-precision hardware is faster than a GPU, but it performs arithmetic with 1% error. In this arithmetic, 98 + 2 = 99.776 and 10 * 10 = 100.863. This thesis determines whether DNNs would still be able to produce state-of-the-art results using this low-precision arithmetic. To answer this question, we implement an approximate DNN that uses the low-precision arithmetic and evaluate it on the TIMIT phoneme recognition task and the WSJ speech recognition task. For both tasks, we find that acoustic models based on approximate DNNs perform as well as ones based on conventional DNNs; both produce similar recognition error rates. The approximate DNN is able to match the conventional DNN only if it uses Kahan summations to preserve precision. These results show that DNNs can run on low-precision hardware without the arithmetic causing any loss in recognition ability. The low-precision hardware is therefore a suitable approach for speeding up DNN training.
by Leo Liu.
M. Eng.
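A note on the Kahan summation that the abstract above credits with preserving precision: a compensation term carries the low-order bits that each addition would otherwise discard. A minimal Python sketch, shown here in ordinary float64 rather than the thesis's 1%-error arithmetic:

    def kahan_sum(values):
        # Compensated (Kahan) summation: track the rounding error of each
        # addition and feed it back, so low-order bits are not lost.
        total = 0.0
        compensation = 0.0
        for v in values:
            y = v - compensation             # reintroduce previously lost bits
            t = total + y                    # big + small: low bits of y drop out
            compensation = (t - total) - y   # recover what was just lost
            total = t
        return total

    # Example: one million small increments. The compensated result stays
    # closer to the exact value 0.01 than naive left-to-right summation.
    values = [1e-8] * 1_000_000
    print(sum(values), kahan_sum(values))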
Scotti, Andrea. "Graph Neural Networks and Learned Approximate Message Passing Algorithms for Massive MIMO Detection". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284500.
Massive MIMO (multiple-input, multiple-output) improves the performance of wireless communication systems by deploying a large number of antennas at both the transmitter and the receiver. In fifth-generation (5G) mobile communication systems, Massive MIMO is a key technology for serving the growing number of mobile users and meeting their demands. At the same time, the computational complexity of recovering the transmitted information in a wireless Massive MIMO uplink grows with the number of antennas: the optimal maximum-likelihood (ML) detector has a computational complexity that increases exponentially with the number of transmitters. One of the main challenges in this field is therefore to find the best suboptimal MIMO detection algorithm with respect to both performance and complexity. In this work, we show how MIMO detection can be represented by a Markov Random Field (MRF) and use loopy belief propagation (LBP) to solve the corresponding maximum a posteriori (MAP) inference problem. We then propose a new algorithm (BP-MMSE) that combines LBP and minimum mean square error (MMSE) estimation to handle higher modulation orders such as QAM-16 and QAM-64 (quadrature amplitude modulation). To avoid the complexity of computing the MMSE solution, we use graph neural networks (GNNs) to learn a message-passing algorithm that solves the inference problem on the same graph. On a complete graph, a message-passing algorithm must exchange a number of messages quadratic in the number of nodes. To reduce this computational cost, approximate message passing (AMP) can be derived from LBP in the large-system limit to solve MIMO detection for independent and identically distributed (i.i.d.) Gaussian channels. We then show how AMP with damping (DAMP) can remain robust on low- to medium-correlated channels. Finally, we propose a computationally inexpensive iterative deep neural network algorithm (Pseudo-MMNet) to solve MIMO detection on highly correlated channels, at the cost of online training for each channel realization. Pseudo-MMNet is based on MMNet, presented in [23] (instead of on AMP), and significantly reduces the online training complexity that makes MMNet unrealistic to deploy. All proposed algorithms are evaluated empirically on large MIMO systems and higher modulation orders.
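A note on the damping that distinguishes DAMP: each new AMP estimate is blended with the previous iterate instead of replacing it outright. A schematic Python sketch, where amp_update is a hypothetical stand-in for the per-iteration AMP estimate map:

    import numpy as np

    def damped_fixed_point(amp_update, x0, beta=0.5, iters=50):
        # Damping: accept only a fraction beta of each new AMP estimate.
        # beta = 1 recovers plain (undamped) AMP; smaller beta is more
        # robust on correlated channels at the cost of slower convergence.
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = (1.0 - beta) * x + beta * amp_update(x)
        return x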
Gaur, Yamini. "Exploring Per-Input Filter Selection and Approximation Techniques for Deep Neural Networks". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90404.
Master of Science
Deep neural networks, just like the human brain, can learn important information about the data provided to them and can classify a new input based on the labels corresponding to the provided dataset. Deep learning technology is heavily employed in devices using computer vision, image and video processing, and voice detection. The computational overhead incurred in the classification process of DNNs prohibits their use in smaller devices. This research aims to improve network efficiency in deep learning by replacing 32-bit weights in neural networks with lower-precision weights in an input-dependent manner. Trained neural networks are numerically robust: different layers develop tolerance to minor variations in network parameters, so the differences induced by low-precision calculations fall well within the tolerance limit of the network. However, for aggressive approximation techniques like truncating to 3 and 2 bits, inference accuracy drops severely. We propose a dynamic technique that, during run-time, identifies the approximated filters resulting in low inference accuracy for a given input and replaces those filters with the original filters to achieve high inference accuracy. The proposed technique has been tested for image classification on Convolutional Neural Networks. The datasets used are MNIST and CIFAR-10. The Convolutional Neural Networks used are a 4-layered CNN, LeNet-5, and AlexNet.
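The truncation experiment described above amounts to re-quantizing trained weights onto a few uniform levels. A rough numpy sketch under assumed details (symmetric, per-tensor scaling; the thesis may implement the truncation differently):

    import numpy as np

    def truncate_weights(w, bits):
        # Symmetric uniform quantization to roughly 2**bits levels;
        # at 2-3 bits this is the aggressive regime the abstract mentions.
        half = 2 ** (bits - 1) - 1                  # integer levels per side
        scale = float(np.max(np.abs(w))) or 1.0     # per-tensor scale
        q = np.clip(np.round(w / scale * half), -half, half)
        return q * scale / half

    w = np.random.randn(16).astype(np.float32)
    # Reconstruction error grows sharply as the bit width shrinks.
    for bits in (8, 3, 2):
        print(bits, np.abs(w - truncate_weights(w, bits)).max())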
Dumlupinar, Taha. "Approximate Analysis and Condition Assessment of Reinforced Concrete T-beam Bridges Using Artificial Neural Networks". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609732/index.pdf.
… T-beam bridge population, based on field test data. Manual calibration of these models is extremely time-consuming and laborious. Therefore, a neural network-based method is developed for easy and practical calibration of these models. The ANN model is trained using training data obtained from finite-element analyses, with modal and displacement parameters as inputs and structural parameters as outputs. After the training is completed, the field-measured data set is fed into the trained ANN model. Then, the FE model is updated with the structural parameters predicted by the ANN model. In the final part, Neural Networks (NNs) are used to model the bridge ratings of RC T-beam bridges based on bridge parameters. Bridge load ratings are calculated more accurately by taking into account the actual geometry and detailing of the T-beam bridges. Then, an ANN solution is developed to easily compute bridge load ratings.
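In outline, the calibration loop described above trains a surrogate network on finite-element results and then maps field measurements to structural parameters. A hedged sketch only; the file names, features, and network size are hypothetical placeholders:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Inputs: modal and displacement parameters from FE analyses;
    # outputs: the structural parameters those analyses assumed.
    X = np.load("fe_modal_and_displacement_features.npy")  # hypothetical file
    y = np.load("fe_structural_parameters.npy")            # hypothetical file

    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
    surrogate.fit(X, y)

    # Feed field-measured data through the trained network and use the
    # predicted structural parameters to update the FE model.
    field_X = np.load("field_measurements.npy")            # hypothetical file
    updated_params = surrogate.predict(field_X)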
Tornstad, Magnus. "Evaluating the Practicality of Using a Kronecker-Factored Approximate Curvature Matrix in Newton's Method for Optimization in Neural Networks". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275741.
Second-order optimization methods have long been considered computationally inefficient for the optimization problem in deep learning. An alternative strategy, using a Kronecker-factored approximate curvature (KFAC) matrix in Newton's method, has been proposed in earlier studies. This work evaluates whether the method is practical for deep learning. Tests are run on abstract binary classification problems and on a real regression problem, the Boston Housing dataset. The study found that KFAC offers large savings in time complexity compared to a more naive implementation using the Gauss-Newton matrix. Furthermore, the loss convergence of both stochastic gradient descent (SGD) and KFAC proved architecture-dependent: KFAC tended to converge faster in deep networks, while SGD tended to converge faster in shallow networks. The study concludes that KFAC can perform well for deep learning compared to a basic variant of SGD. However, KFAC turned out to be very sensitive to the initial weights. This problem could be resolved by letting SGD take the first steps, putting KFAC on a favorable trajectory.
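For context on why the Kronecker factorization saves time: a layer's curvature is approximated as F ≈ A ⊗ G, and for symmetric factors the identity (A ⊗ G)^{-1} vec(V) = vec(G^{-1} V A^{-1}) reduces the Newton step to two small linear solves instead of one huge one. A toy numpy sketch of that step (dimensions are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 4, 3
    A = np.cov(rng.standard_normal((n_in, 100)))   # input-activation factor
    G = np.cov(rng.standard_normal((n_out, 100)))  # backprop-gradient factor
    V = rng.standard_normal((n_out, n_in))         # layer gradient

    # Newton-like step without ever forming the (n_in*n_out)^2 curvature.
    step = np.linalg.solve(G, V) @ np.linalg.inv(A)

    # Check against the explicit Kronecker inverse (column-major vec).
    full = np.kron(A, G)
    lhs = np.linalg.solve(full, V.flatten(order="F"))
    assert np.allclose(lhs, step.flatten(order="F"))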
Hanselmann, Thomas. "Approximate dynamic programming with adaptive critics and the algebraic perceptron as a fast neural network related to support vector machines". University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2003. http://theses.library.uwa.edu.au/adt-WU2004.0005.
Malfatti, Guilherme Meneguzzi. "Técnicas de agrupamento de dados para computação aproximativa". Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/169096.
Two of the major drivers of increased performance in single-thread applications - increase in operation frequency and exploitation of instruction-level parallelism - have seen little advance in recent years due to power constraints. In this context, considering the intrinsic imprecision-tolerance of many modern applications (i.e., outputs may present an acceptable level of noise without compromising the result), such as image processing and machine learning, approximate computing becomes a promising approach. This technique is based on computing approximate instead of accurate results, which can increase performance and reduce energy consumption at the cost of quality. In the current state of the art, the most common way of exploiting the technique is through neural networks (more specifically, the Multilayer Perceptron model), due to the ability of these structures to learn arbitrary functions and to approximate them. Such networks are usually implemented in a dedicated neural accelerator. However, this implementation requires a large amount of chip area and usually does not offer enough improvement to justify the additional cost. The goal of this work is to propose a new mechanism for approximate computing, based on the approximate reuse of functions and code fragments. This technique automatically groups input and output data by similarity and stores this information in a software-controlled memory. Based on these data, quantized values can be reused through a lookup in this table, in which the most appropriate output is selected and execution of the original code is replaced. Applying this technique is effective, achieving an average 97.1% reduction in Energy-Delay Product (EDP) when compared to neural accelerators.
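The reuse mechanism described above can be pictured in a few lines of Python. The rounding-based grouping here is an assumption for illustration; the thesis itself groups inputs with clustering techniques:

    import math

    class ApproximateMemo:
        # Approximate function reuse: quantize the input, cache the exact
        # result on a miss, and reuse it for all similar future inputs.
        def __init__(self, func, step=0.05):
            self.func = func
            self.step = step   # width of each similarity bucket
            self.table = {}    # software-managed lookup table

        def __call__(self, x):
            key = round(x / self.step)          # group similar inputs
            if key not in self.table:
                self.table[key] = self.func(x)  # exact path, misses only
            return self.table[key]              # approximate (reused) result

    approx_sin = ApproximateMemo(math.sin)
    approx_sin(1.00)          # miss: computes math.sin(1.00)
    print(approx_sin(1.01))   # hit: reuses the cached value for the bucket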
Books on the topic "Approximate identity neural networks"
Snail, Mgebwi Lavin. The antecedens [sic] and the emergence of the black consciousness movement in South Africa: Its ideology and organisation. München: Akademischer Verlag, 1993.
Butz, Martin V., and Esther F. Kutter. Brain Basics from a Computational Perspective. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780198739692.003.0007.
Bindemann, Markus, ed. Forensic Face Matching. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198837749.001.0001.
Testo completoCapitoli di libri sul tema "Approximate identity neural networks"
Fard, Saeed Panahian, and Zarita Zainuddin. "Toroidal Approximate Identity Neural Networks Are Universal Approximators". In Neural Information Processing, 135–42. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12637-1_17.
Zainuddin, Zarita, and Saeed Panahian Fard. "Double Approximate Identity Neural Networks Universal Approximation in Real Lebesgue Spaces". In Neural Information Processing, 409–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34475-6_49.
Panahian Fard, Saeed, and Zarita Zainuddin. "The Universal Approximation Capabilities of Mellin Approximate Identity Neural Networks". In Advances in Neural Networks – ISNN 2013, 205–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39065-4_26.
Panahian Fard, Saeed, and Zarita Zainuddin. "Universal Approximation by Generalized Mellin Approximate Identity Neural Networks". In Proceedings of the 4th International Conference on Computer Engineering and Networks, 187–94. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-11104-9_22.
Fard, Saeed Panahian, and Zarita Zainuddin. "The Universal Approximation Capability of Double Flexible Approximate Identity Neural Networks". In Lecture Notes in Electrical Engineering, 125–33. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-01766-2_15.
Panahian Fard, Saeed, and Zarita Zainuddin. "On the Universal Approximation Capability of Flexible Approximate Identity Neural Networks". In Emerging Technologies for Information Systems, Computing, and Management, 201–7. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-7010-6_23.
Hanif, Muhammad Abdullah, Muhammad Usama Javed, Rehan Hafiz, Semeen Rehman, and Muhammad Shafique. "Hardware–Software Approximations for Deep Neural Networks". In Approximate Circuits, 269–88. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99322-5_13.
Choi, Jungwook, and Swagath Venkataramani. "Approximate Computing Techniques for Deep Neural Networks". In Approximate Circuits, 307–29. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99322-5_15.
Ishibuchi, H., and H. Tanaka. "Approximate Pattern Classification Using Neural Networks". In Fuzzy Logic, 225–36. Dordrecht: Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-011-2014-2_22.
Bai, Xuerui, Jianqiang Yi, and Dongbin Zhao. "Approximate Dynamic Programming for Ship Course Control". In Advances in Neural Networks – ISNN 2007, 349–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-72383-7_41.
Testo completoAtti di convegni sul tema "Approximate identity neural networks"
Zainuddin, Zarita, and Saeed Panahian Fard. "Spherical approximate identity neural networks are universal approximators". In 2014 10th International Conference on Natural Computation (ICNC). IEEE, 2014. http://dx.doi.org/10.1109/icnc.2014.6975812.
Fard Panahian, Saeed, and Zarita Zainuddin. "Universal Approximation Property of Weighted Approximate Identity Neural Networks". In The 5th International Conference on Computer Engineering and Networks. Trieste, Italy: Sissa Medialab, 2015. http://dx.doi.org/10.22323/1.259.0007.
Panahian Fard, Saeed, and Zarita Zainuddin. "The Universal Approximation Capabilities of 2pi-Periodic Approximate Identity Neural Networks". In 2013 International Conference on Information Science and Cloud Computing Companion (ISCC-C). IEEE, 2013. http://dx.doi.org/10.1109/iscc-c.2013.147.
Fard, Saeed Panahian. "Solving Universal Approximation Problem by Hankel Approximate Identity Neural Networks in Function Spaces". In The Fourth International Conference on Information Science and Cloud Computing. Trieste, Italy: Sissa Medialab, 2016. http://dx.doi.org/10.22323/1.264.0031.
Zainuddin, Zarita, and Saeed Panahian Fard. "Approximation of multivariate 2π-periodic functions by multiple 2π-periodic approximate identity neural networks based on the universal approximation theorems". In 2015 11th International Conference on Natural Computation (ICNC). IEEE, 2015. http://dx.doi.org/10.1109/icnc.2015.7377957.
Ahmadian, M. T., and A. Mobini. "Online Prediction of Plate Deformations Under External Forces Using Neural Networks". In ASME 2006 International Mechanical Engineering Congress and Exposition. ASMEDC, 2006. http://dx.doi.org/10.1115/imece2006-15844.
Mao, X., V. Joshi, T. P. Miyanawala, and Rajeev K. Jaiman. "Data-Driven Computing With Convolutional Neural Networks for Two-Phase Flows: Application to Wave-Structure Interaction". In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-78425.
Li, Longyuan, Junchi Yan, Xiaokang Yang, and Yaohui Jin. "Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/402.
Sen, Sanchari, Swagath Venkataramani, and Anand Raghunathan. "Approximate computing for spiking neural networks". In 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2017. http://dx.doi.org/10.23919/date.2017.7926981.
Xu, Xiangrui, Yaqin Lee, Yunlong Gao, and Cao Yuan. "Adding identity numbers to deep neural networks". In Automatic Target Recognition and Navigation, edited by Hanyu Hong, Jianguo Liu, and Xia Hua. SPIE, 2020. http://dx.doi.org/10.1117/12.2540293.