Academic literature on the topic 'Efficient Neural Networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Efficient Neural Networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Efficient Neural Networks"

1

Guo, Qingbei, Xiao-Jun Wu, Josef Kittler, and Zhiquan Feng. "Differentiable neural architecture learning for efficient neural networks." Pattern Recognition 126 (June 2022): 108448. http://dx.doi.org/10.1016/j.patcog.2021.108448.

2

Liu, Shiya, Dong Sam Ha, Fangyang Shen, and Yang Yi. "Efficient neural networks for edge devices." Computers & Electrical Engineering 92 (June 2021): 107121. http://dx.doi.org/10.1016/j.compeleceng.2021.107121.

3

Li, Guoqing, Meng Zhang, Jiaojie Li, Feng Lv, and Guodong Tong. "Efficient densely connected convolutional neural networks." Pattern Recognition 109 (January 2021): 107610. http://dx.doi.org/10.1016/j.patcog.2020.107610.

4

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. "Efficient Processing of Deep Neural Networks." Synthesis Lectures on Computer Architecture 15, no. 2 (June 16, 2020): 1–341. http://dx.doi.org/10.2200/s01004ed1v01y202004cac050.

5

Zelenina, Larisa I., D. S. Khripunov, Liudmila E. Khaimina, Evgenii S. Khaimin, and Inga M. Zashikhina. "The Problem of Images’ Classification: Neural Networks." Mathematics and Informatics LXIV, no. 3 (June 30, 2021): 289–300. http://dx.doi.org/10.53656/math2021-3-4-the.

Abstract:
The article discusses the use of an artificial neural network for the problem of image classification. Based on the compiled dataset, a convolutional neural network model was implemented and trained. An application for image classification was created. The practical value of the developed application lies in the efficient use of a smartphone camera’s storage. The article presents a detailed methodology of the application’s development.
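For readers who want a concrete picture of the kind of model this abstract describes, a minimal convolutional image classifier is sketched below in PyTorch. It is a generic illustration only; the layer sizes, input resolution, and 10-class output are assumptions, not details of the cited work.

```python
import torch
import torch.nn as nn

# Minimal convolutional classifier for small RGB images (e.g., 32x32).
# Generic sketch; the architecture and the 10-class output are assumptions.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3 input channels -> 16 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # logits for 10 hypothetical classes
)

logits = model(torch.randn(1, 3, 32, 32))         # one dummy image
print(logits.shape)                               # torch.Size([1, 10])
```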
6

Fahim, Houda, Olivier Sawadogo, Nour Alaa, and Mohammed Guedda. "AN EFFICIENT IDENTIFICATION OF RED BLOOD CELL EQUILIBRIUM SHAPE USING NEURAL NETWORKS." Eurasian Journal of Mathematical and Computer Applications 9, no. 2 (June 2021): 39–56. http://dx.doi.org/10.32523/2306-6172-2021-9-2-39-56.

Abstract:
This work of applied mathematics with interfaces in bio-physics focuses on the shape identification and numerical modelisation of a single red blood cell shape. The purpose of this work is to provide a quantitative method for interpreting experimental observations of the red blood cell shape under microscopy. In this paper we give a new formulation based on classical theory of geometric shape minimization which assumes that the curvature energy with additional constraints controls the shape of the red blood cell. To minimize this energy under volume and area constraints, we propose a new hybrid algorithm which combines Particle Swarm Optimization (PSO), Gravitational Search (GSA) and Neural Network Algorithm (NNA). The results obtained using this new algorithm agree well with the experimental results given by Evans et al. (8) especially for sphered and biconcave shapes.
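For orientation, the "curvature energy with additional constraints" mentioned in the abstract is conventionally written as a Canham–Helfrich bending energy minimized under fixed membrane area and enclosed volume. The display below shows that standard textbook form only; the exact functional, constraints, and penalty terms used in the cited paper may differ.

```latex
% Standard constrained curvature-energy minimization (Canham-Helfrich form);
% H is the mean curvature, \kappa the bending rigidity, A_0 and V_0 the
% prescribed membrane area and enclosed volume.
\min_{S}\; E(S) = \frac{\kappa}{2}\oint_{S}(2H)^{2}\,\mathrm{d}A
\quad\text{subject to}\quad
\oint_{S}\mathrm{d}A = A_{0},\qquad V(S) = V_{0}.
```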
7

Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network." ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3491255.

Abstract:
The tensor network, as an effective computing framework for efficient processing and analysis of high-dimensional data, has been successfully applied in many fields. However, the performance of traditional tensor networks still cannot match the strong fitting ability of neural networks, so some data processing algorithms based on tensor networks cannot achieve the same excellent performance as deep learning models. To further improve the learning ability of tensor networks, we propose a quantized tensor neural network (QTNN) in this article, which integrates the advantages of neural networks and tensor networks, namely, the powerful learning ability of neural networks and the simplicity of tensor networks. The QTNN model can be further regarded as a generalized multilayer nonlinear tensor network, which can efficiently extract low-dimensional features of the data while maintaining the original structure information. In addition, to more effectively represent the local information of data, we introduce multiple convolution layers in QTNN to extract local features. We also develop a high-order back-propagation algorithm for training the parameters of QTNN. We conducted classification experiments on multiple representative datasets to further evaluate the performance of the proposed models, and the experimental results show that QTNN is simpler and more efficient when compared to classic deep learning models.
8

Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaptation to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn a mapping from neural networks to symbolic representations.
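A minimal sketch of the general idea, an "interpretation network" that maps another network's flattened parameters to the coefficients of a low-order polynomial, is given below. The sizes, architecture, and polynomial degree are assumptions for illustration, not the authors' I-Net.

```python
import torch
import torch.nn as nn

N_PARAMS = 1000   # length of the flattened weight vector of the explained network (assumed)
DEGREE = 3        # degree of the explaining polynomial (assumed)

# The I-Net idea: a network that takes another network's parameters as input
# and outputs a symbolic representation, here polynomial coefficients a_0..a_3.
i_net = nn.Sequential(
    nn.Linear(N_PARAMS, 256),
    nn.ReLU(),
    nn.Linear(256, DEGREE + 1),
)

theta = torch.randn(1, N_PARAMS)   # flattened parameters of the network to explain
coeffs = i_net(theta)              # a single query yields the explanation
print(coeffs.shape)                # torch.Size([1, 4]): a_0 + a_1*x + a_2*x^2 + a_3*x^3
```

In the cited approach the interpretation network is trained up-front on a family of functions, so producing an explanation at run time amounts to this single forward pass.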
9

Zhang, Huan, Pengchuan Zhang, and Cho-Jui Hsieh. "RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5757–64. http://dx.doi.org/10.1609/aaai.v33i01.33015757.

Abstract:
The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants and robustness to adversarial attacks. In this paper, we propose a recursive algorithm, RecurJac, to compute both upper and lower bounds for each element in the Jacobian matrix of a neural network with respect to the network’s input, and the network can contain a wide range of activation functions. As a byproduct, we can efficiently obtain a (local) Lipschitz constant, which plays a crucial role in neural network robustness verification, as well as the training stability of GANs. Experiments show that (local) Lipschitz constants produced by our method are of better quality than those of previous approaches, thus providing better robustness verification results. Our algorithm has polynomial time complexity, and its computation time is reasonable even for relatively large networks. Additionally, we use our bounds on the Jacobian matrix to characterize the landscape of the neural network, for example, to determine whether there exist stationary points in a local neighborhood.
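As a reminder of why such Lipschitz bounds matter for verification, the standard argument that converts a local Lipschitz constant of the classification margin into a certified robustness radius is shown below. This is the generic textbook reasoning, not the RecurJac algorithm itself, which is concerned with computing the bound L.

```latex
% Generic certification argument: L is a (local) Lipschitz constant of the
% margin g around the input x, for the norm used to measure the perturbation.
g(x) \;=\; f_{c}(x) - \max_{j \neq c} f_{j}(x) \;>\; 0,
\qquad
\|\delta\| < \frac{g(x)}{L}
\;\;\Longrightarrow\;\;
\arg\max_{j} f_{j}(x+\delta) = c .
```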
10

Zhang, Wenqiang, Jiemin Fang, Xinggang Wang, and Wenyu Liu. "EfficientPose: Efficient human pose estimation with neural architecture search." Computational Visual Media 7, no. 3 (April 7, 2021): 335–47. http://dx.doi.org/10.1007/s41095-021-0214-z.

Abstract:
Human pose estimation from image and video is a key task in many multimedia applications. Previous methods achieve great performance but rarely take efficiency into consideration, which makes it difficult to implement the networks on lightweight devices. Nowadays, real-time multimedia applications call for more efficient models for better interaction. Moreover, most deep neural networks for pose estimation directly reuse networks designed for image classification as the backbone, which are not optimized for the pose estimation task. In this paper, we propose an efficient framework for human pose estimation with two parts, an efficient backbone and an efficient head. By implementing a differentiable neural architecture search method, we customize the backbone network design for pose estimation, and reduce computational cost with negligible accuracy degradation. For the efficient head, we slim the transposed convolutions and propose a spatial information correction module to promote the performance of the final prediction. In experiments, we evaluate our networks on the MPII and COCO datasets. Our smallest model requires only 0.65 GFLOPs with 88.1% PCKh@0.5 on MPII and our large model needs only 2 GFLOPs while its accuracy is competitive with the state-of-the-art large model, HRNet, which takes 9.5 GFLOPs.

Dissertations / Theses on the topic "Efficient Neural Networks"

1

Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.

Abstract:
Deep Learning algorithms have been remarkably successful in applications such as Automatic Speech Recognition and Machine Translation. Thus, these kinds of applications are ubiquitous in our lives and are found in a plethora of devices. These algorithms are composed of Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have a large number of parameters and require a large amount of computations. Hence, the evaluation of DNNs is challenging due to their large memory and power requirements. RNNs are employed to solve sequence-to-sequence problems such as Machine Translation. They contain data dependencies among the executions of time-steps; hence, the amount of parallelism is severely limited. Thus, evaluating them in an energy-efficient manner is more challenging than evaluating other DNN algorithms. This thesis studies applications using RNNs to improve their energy efficiency on specialized architectures. Specifically, we propose novel energy-saving techniques and highly efficient architectures tailored to the evaluation of RNNs. We focus on the most successful RNN topologies, which are the Long Short-Term Memory and the Gated Recurrent Unit. First, we characterize a set of RNNs running on a modern SoC. We identify that accessing the memory to fetch the model weights is the main source of energy consumption. Thus, we propose E-PUR: an energy-efficient processing unit for RNN inference. E-PUR achieves a 6.8x speedup and improves energy consumption by 88x compared to the SoC. These benefits are obtained by improving the temporal locality of the model weights. In E-PUR, fetching the parameters is the main source of energy consumption. Thus, we strive to reduce memory accesses and propose a scheme to reuse previous computations. Our observation is that when evaluating the input sequences of an RNN model, the output of a given neuron tends to change only slightly between consecutive evaluations. Thus, we develop a scheme that caches the neurons' outputs and reuses them whenever it detects that the change between the current and previously computed output value for a given neuron is small, avoiding fetching the weights. In order to decide when to reuse a previous value, we employ a Binary Neural Network (BNN) as a predictor of reusability. The low-cost BNN can be employed in this context since its output is highly correlated to the output of RNNs. We show that our proposal avoids more than 24.2% of computations. Hence, on average, energy consumption is reduced by 18.5% for a speedup of 1.35x. RNN models’ memory footprint is usually reduced by using low precision for evaluation and storage. In this case, the minimum precision used is identified offline and it is set such that the model maintains its accuracy. This method utilizes the same precision to compute all time-steps. Yet, we observe that some time-steps can be evaluated with a lower precision while preserving the accuracy. Thus, we propose a technique that dynamically selects the precision used to compute each time-step. A challenge of our proposal is choosing a lower bit-width. We address this issue by recognizing that information from a previous evaluation can be employed to determine the precision required in the current time-step. Our scheme evaluates 57% of the computations on a bit-width lower than the fixed precision employed by static methods. We implement it on E-PUR and it provides a 1.46x speedup and 19.2% energy savings on average.
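To make the computation-reuse idea concrete in software terms, a toy sketch is given below. The actual proposal is a hardware scheme that uses a small binary neural network to predict reusability and thereby skips weight fetches; the simple change-detection test, threshold, and shapes here are assumptions for illustration only.

```python
import numpy as np

def reuse_aware_step(x_t, W, cached_x, cached_y, eps=1e-2):
    """Recompute a layer's outputs only when its input changed noticeably.

    Toy illustration of reusing neuron outputs across consecutive RNN
    time-steps; the cited thesis instead predicts reusability with a
    binary neural network and avoids fetching weights in hardware.
    """
    if np.abs(x_t - cached_x).max() <= eps:      # crude stand-in for the BNN predictor
        return cached_y, cached_x, cached_y      # reuse the previous outputs
    y_t = np.tanh(W @ x_t)                       # full recomputation
    return y_t, x_t, y_t

# Carry the caches across the time-steps of an input sequence.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x0 = rng.standard_normal(8)
y0 = np.tanh(W @ x0)
y1, cx, cy = reuse_aware_step(x0 + 1e-4, W, cached_x=x0, cached_y=y0)  # tiny change: reused
```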
2

Golea, Mostefa. "On efficient learning algorithms for neural networks." Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6508.

Abstract:
Inductive Inference Learning can be described in terms of finding a good approximation to some unknown classification rule f, based on a pre-classified set of training examples ⟨x, f(x)⟩. One particular class of learning systems that has attracted much attention recently is the class of neural networks. But despite the excitement generated by neural networks, learning in these systems has proven to be a difficult task. In this thesis, we investigate different ways and means to overcome the difficulty of training feedforward neural networks. Our goal is to come up with efficient learning algorithms for new classes (or architectures) of neural nets. In the first approach, we relax the constraint of fixed architecture adopted by most neural learning algorithms. We describe two constructive learning algorithms for two-layer and tree-like networks. In the second approach, we adopt the "probably approximately correct" (PAC) learning model and we look for positive learnability results by restricting the distribution generating the training examples, the connectivity of the networks, and/or the weight values. This enables us to identify new classes of neural networks that are efficiently learnable in the chosen setting. In the third and final approach, we look at the problem of learning in neural networks from the average case point of view. In particular, we investigate the average case behavior of the well known clipped Hebb rule when learning different neural networks with binary weights. The arguments given for the "efficient learnability" range from extensive simulations to rigorous mathematical proofs.
3

Islam, Taj-ul. "Channel routing: efficient solutions using neural networks." Online version of thesis, 1993. http://hdl.handle.net/1850/11154.

4

Zhao, Wei. "Efficient neural networks for prediction of turbulent flow." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/16939.

5

Billings, Rachel Mae. "On Efficient Computer Vision Applications for Neural Networks." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102957.

Abstract:
Since approximately the dawn of the new millennium, neural networks and other machine learning algorithms have become increasingly capable of adeptly performing difficult, dull, and dangerous work conventionally carried out by humans in times of old. As these algorithms become steadily more commonplace in everyday consumer and industry applications, the consideration of how they may be implemented on constrained hardware systems such as smartphones and Internet-of-Things (IoT) peripheral devices in a time- and power-efficient manner, while also understanding the scenarios in which they fail, is of increasing importance. This work investigates implementations of convolutional neural networks specifically in the context of image inference tasks. Three areas are analyzed: (1) a time- and power-efficient face recognition framework, (2) the development of a COVID-19-related mask classification system suitable for deployment on low-cost, low-power devices, and (3) an investigation into the implementation of spiking neural networks on mobile hardware and their conversion from traditional neural network architectures.
Master of Science
The subject of machine learning and its associated jargon have become ubiquitous in the past decade as industries seek to develop automated tools and applications and researchers continue to develop new methods for artificial intelligence and improve upon existing ones. Neural networks are a type of machine learning algorithm that can make predictions in complex situations based on input data with human-like (or better) accuracy. Real-time, low-power, and low-cost systems using these algorithms are increasingly used in consumer and industry applications, often improving the efficiency of completing mundane and hazardous tasks traditionally performed by humans. The focus of this work is (1) to explore when and why neural networks may make incorrect decisions in the domain of image-based prediction tasks, (2) to demonstrate a low-power, low-cost machine learning use case using a mask recognition system intended to be suitable for deployment in support of COVID-19-related mask regulations, and (3) to investigate how neural networks may be implemented on resource-limited technology in an efficient manner using an emerging form of computing.
6

Bozorgmehr, Pouya. "An efficient online feature extraction algorithm for neural networks." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p1470604.

Abstract:
Thesis (M.S.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed January 13, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 61-63).
7

Al-Hindi, Khalid A. "Flexible basis function neural networks for efficient analog implementations /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3074367.

8

Ekman, Carl. "Traffic Sign Classification Using Computationally Efficient Convolutional Neural Networks." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157453.

Abstract:
Traffic sign recognition is an important problem for autonomous cars and driver assistance systems. With recent developments in the field of machine learning, high performance can be achieved, but typically at a large computational cost. This thesis aims to investigate the relation between classification accuracy and computational complexity for the visual recognition problem of classifying traffic signs. In particular, the benefits of partitioning the classification problem into smaller sub-problems using prior knowledge in the form of shape or current region are investigated. In the experiments, the convolutional neural network (CNN) architecture MobileNetV2 is used, as it is specifically designed to be computationally efficient. To incorporate prior knowledge, separate CNNs are used for the different subsets generated when partitioning the dataset based on region or shape. The separate CNNs are trained from scratch or initialized by pre-training on the full dataset. The results support the intuitive idea that performance initially increases with network size and indicate a network size where the improvement stops. Including shape information using the two investigated methods does not result in a significant improvement. Including region information using pretrained separate classifiers results in a small improvement for small complexities, for one of the regions in the experiments. In the end, none of the investigated methods of including prior knowledge are considered to yield an improvement large enough to justify the added implementational complexity. However, some other methods are suggested, which would be interesting to study in future work.
9

Adamu, Abdullahi S. "An empirical study towards efficient learning in artificial neural networks by neuronal diversity." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33799/.

Abstract:
Artificial Neural Networks (ANNs) are biologically inspired algorithms, and it is natural that biology continues to inspire research in artificial neural networks. From the recent breakthrough of deep learning to the wake-sleep training routine, all have a common source of inspiration: biology. The transfer functions of artificial neural networks play the important role of forming the decision boundaries necessary for learning. However, there has been relatively little research on transfer function optimization compared to other aspects of neural network optimization. In this work, neuronal diversity - a property found in biological neural networks - is explored as a potentially promising method of transfer function optimization. This work shows how neural diversity can improve generalization in the context of the literature on the bias-variance decomposition and meta-learning. It then demonstrates that neural diversity - represented in the form of transfer function diversity - can produce diverse and accurate computational strategies that can be used as ensembles with competitive results, without supplementing them with other diversity maintenance schemes that tend to be computationally expensive. This work also presents neural network meta-features, described as problem signatures, sampled from models with diverse transfer functions for problem characterization. These were shown to meet the criteria of basic properties desired for any meta-feature, i.e. consistency for a problem and discrimination between different problems. Furthermore, these meta-features were also used to study the underlying computational strategies adopted by the neural network models, which led to the discovery of the strong discriminatory property of the evolved transfer functions. The culmination of this study is the co-evolution of neurally diverse neurons with their weights and topology for efficient learning. This approach achieves significant generalization ability, as demonstrated by its average MSE of 0.30 on 22 different benchmarks with minimal resources (i.e. two hidden units). Interestingly, these are the properties associated with neural diversity, showing that efficiency and increased computational capacity can be replicated with transfer function diversity in artificial neural networks.
10

Etchells, Terence Anthony. "Rule extraction from neural networks : a practical and efficient approach." Thesis, Liverpool John Moores University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402847.


Books on the topic "Efficient Neural Networks"

1

Approximation methods for efficient learning of Bayesian networks. Amsterdam: IOS Press, 2008.

2

Omohundro, Stephen M. Efficient algorithms with neural network behavior. Urbana, Il (1304 W. Springfield Ave., Urbana 61801): Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1987.

3

Costa, Álvaro. Evaluating public transport efficiency with neural network models. Loughborough: Loughborough University, Department of Economics, 1996.

4

Markellos, Raphael N. Robust estimation of nonlinear production frontiers and efficiency: A neural network approach. Loughborough: Loughborough University, Department of Economics, 1997.

5

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Morgan & Claypool Publishers, 2020.

6

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Springer International Publishing AG, 2020.

7

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Morgan & Claypool Publishers, 2020.

8

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Morgan & Claypool Publishers, 2020.

9

Ghotra, Manpreet Singh, and Rajdeep Dua. Neural Network Programming with TensorFlow: Unleash the power of TensorFlow to train efficient neural networks. Packt Publishing, 2017.

10

Takikawa, Masami. Representations and algorithms for efficient inference in Bayesian networks. 1998.


Book chapters on the topic "Efficient Neural Networks"

1

Awad, Mariette, and Rahul Khanna. "Deep Neural Networks." In Efficient Learning Machines, 127–47. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-5990-9_7.

2

Karayiannis, N. B., and A. N. Venetsanopoulos. "ELEANNE: Efficient LEarning Algorithms for Neural NEtworks." In Artificial Neural Networks, 87–139. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4_3.

3

Ferrari, Enrico, and Marco Muselli. "Efficient Constructive Techniques for Training Switching Neural Networks." In Constructive Neural Networks, 25–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04512-7_2.

4

Bellot, Pau, and Patrick E. Meyer. "Efficient Combination of Pairwise Feature Networks." In Neural Connectomics Challenge, 85–93. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-53070-3_7.

5

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. "Overview of Deep Neural Networks." In Efficient Processing of Deep Neural Networks, 17–39. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01766-7_2.

6

Lee, Hyunjung, Harksoo Kim, and Jungyun Seo. "Efficient Domain Action Classification Using Neural Networks." In Neural Information Processing, 150–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11893257_17.

7

Bahroun, Yanis, Eugénie Hunsicker, and Andrea Soltoggio. "Neural Networks for Efficient Nonlinear Online Clustering." In Neural Information Processing, 316–24. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70087-8_34.

8

Rajalakshmi, Ratnavel, Abhinav Basil Shinow, Aswin Murali, Kashinadh S. Nair, and J. Bhuvana. "An Efficient Convolutional Neural Network with Image Augmentation for Cassava Leaf Disease Detection." In Recurrent Neural Networks, 289–305. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-19.

9

Van Belle, Vanya, Kristiaan Pelckmans, Johan A. K. Suykens, and Sabine Van Huffel. "MINLIP: Efficient Learning of Transformation Models." In Artificial Neural Networks – ICANN 2009, 60–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_7.

10

Alamro, Hayam, Mai Alzamel, Costas S. Iliopoulos, Solon P. Pissis, Steven Watts, and Wing-Kin Sung. "Efficient Identification of k-Closed Strings." In Engineering Applications of Neural Networks, 583–95. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-65172-9_49.


Conference papers on the topic "Efficient Neural Networks"

1

Laube, Kevin A., and Andreas Zell. "ShuffleNASNets: Efficient CNN models through modified Efficient Neural Architecture Search." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852294.

2

Manjunathswamy, B. E., J. Thriveni, K. R. Venugopal, and L. M. Patnaik. "Efficient iris retrieval using neural networks." In 2012 Nirma University International Conference on Engineering (NUiCONE). IEEE, 2012. http://dx.doi.org/10.1109/nuicone.2012.6493176.

3

Yan, Bencheng, Chaokun Wang, Gaoyang Guo, and Yunkai Lou. "TinyGNN: Learning Efficient Graph Neural Networks." In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394486.3403236.

4

Miltrup, Matthias, and Georg Schnitger. "Neural networks and efficient associative memory." In the eleventh annual conference. New York, New York, USA: ACM Press, 1998. http://dx.doi.org/10.1145/279943.279991.

5

Kaylani, A., M. Georgiopoulos, M. Mollaghasemi, and G. C. Anagnostopoulos. "Efficient evolution of ART neural networks." In 2008 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2008. http://dx.doi.org/10.1109/cec.2008.4631265.

6

Nagarajan, Amrit, Jacob R. Stevens, and Anand Raghunathan. "Efficient ensembles of graph neural networks." In DAC '22: 59th ACM/IEEE Design Automation Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3489517.3530416.

7

Koiran, Pascal. "Efficient learning of continuous neural networks." In the seventh annual conference. New York, New York, USA: ACM Press, 1994. http://dx.doi.org/10.1145/180139.181177.

8

Kopuklu, Okan, Neslihan Kose, Ahmet Gunduz, and Gerhard Rigoll. "Resource Efficient 3D Convolutional Neural Networks." In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00240.

9

Young, Steven R., Pravallika Devineni, Maryam Parsa, J. Travis Johnston, Bill Kay, Robert M. Patton, Catherine D. Schuman, Derek C. Rose, and Thomas E. Potok. "Evolving Energy Efficient Convolutional Neural Networks." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006239.

10

Kim, Minsik, Cheonjun Park, Sungjun Kim, Taeyoung Hong, and Won Woo Ro. "Efficient Dilated-Winograd Convolutional Neural Networks." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803277.


Reports on the topic "Efficient Neural Networks"

1

Semerikov, Serhiy, Illia Teplytskyi, Yuliia Yechkalo, Oksana Markova, Vladimir Soloviev, and Arnold Kiv. Computer Simulation of Neural Networks Using Spreadsheets: Dr. Anderson, Welcome Back. [б. в.], June 2019. http://dx.doi.org/10.31812/123456789/3178.

Abstract:
The authors of the given article continue the series presented by the 2018 paper “Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot”. This time, they consider mathematical informatics as the basis for the fundamentalization of higher engineering education. Mathematical informatics deals with smart simulation, information security, long-term data storage and big data management, artificial intelligence systems, etc. The authors suggest studying the basic principles of mathematical informatics by applying cloud-oriented means of various levels, including those traditionally considered supplementary – spreadsheets. The article considers ways of building neural network models in a cloud-oriented spreadsheet, Google Sheets. The model is based on the problem of classifying multi-dimensional data provided in “The Use of Multiple Measurements in Taxonomic Problems” by R. A. Fisher. Edgar Anderson’s role in collecting and preparing the data in the 1920s-1930s is discussed, as well as some peculiarities of data selection. The article also presents the method of multi-dimensional data presentation in the form of an ideograph, developed by Anderson and considered one of the first efficient ways of data visualization.
2

Wang, Felix, Nick Alonso, and Corinne Teeter. Combining Spike Time Dependent Plasticity (STDP) and Backpropagation (BP) for Robust and Data Efficient Spiking Neural Networks (SNN). Office of Scientific and Technical Information (OSTI), December 2022. http://dx.doi.org/10.2172/1902866.

3

Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.

Abstract:
We present Any-Precision Deep Neural Networks (Any-Precision DNNs), which are trained with a new method that empowers learned DNNs to be flexible in any numerical precision during inference. The same model in runtime can be flexibly and directly set to different bit-width, by truncating the least significant bits, to support dynamic speed and accuracy trade-off. When all layers are set to low bits, we show that the model achieved accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world applications, where in practice trade-offs between model accuracy and runtime efficiency are often sought. Previous literature presents solutions to train models at each individual fixed efficiency/accuracy trade-off point. But how to produce a model flexible in runtime precision is largely unexplored. When the demand of efficiency/accuracy trade-off varies from time to time or even dynamically changes in runtime, it is infeasible to re-train models accordingly, and the storage budget may forbid keeping multiple models. Our proposed framework achieves this flexibility without performance degradation. More importantly, we demonstrate that this achievement is agnostic to model architectures. We experimentally validated our method with different deep network backbones (AlexNet-small, Resnet-20, Resnet-50) on different datasets (SVHN, Cifar-10, ImageNet) and observed consistent results.
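The mechanism the abstract turns on, serving a single stored model at several bit-widths by dropping least-significant bits at run time, can be illustrated with the toy quantizer below. The symmetric 8-bit storage format and the truncation-by-shifting scheme are assumptions for illustration, not the paper's exact quantization method.

```python
import numpy as np

def quantize_int8(w):
    """Store weights once as symmetric 8-bit integers plus a scale factor."""
    scale = np.abs(w).max() / 127.0
    q8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q8, scale

def dequantize_at(q8, scale, bits):
    """Run at a lower precision by truncating the least significant bits."""
    shift = 8 - bits
    q_low = (q8.astype(np.int32) >> shift) << shift   # drop the low-order bits
    return q_low * scale

w = np.random.randn(5).astype(np.float32)
q8, s = quantize_int8(w)
for bits in (8, 4, 2):            # the same stored model served at three precisions
    print(bits, dequantize_at(q8, s, bits))
```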
4

Yaroshchuk, Svitlana O., Nonna N. Shapovalova, Andrii M. Striuk, Olena H. Rybalchenko, Iryna O. Dotsenko, and Svitlana V. Bilashenko. Credit scoring model for microfinance organizations. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3683.

Abstract:
The purpose of the work is the development and application of models for the scoring assessment of microfinance institution borrowers. This model makes it possible to increase the efficiency of work in the field of credit. The object of research is lending. The subject of the study is a direct scoring model for improving the quality of lending using machine learning methods. The objectives of the study are to determine the criteria for choosing a solvent borrower, to develop a model for early assessment, and to create software based on neural networks to determine the probability of loan default risk. The research methods used include analysis of the literature on banking scoring; artificial intelligence methods for scoring; modeling of a scoring estimation algorithm using neural networks; an empirical method for determining the optimal parameters of the training model; and object-oriented design and programming. The result of the work is a neural network scoring model with high calculation accuracy and an implemented system for automatic customer lending.
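For a concrete, if simplistic, picture of such a scoring model, the sketch below trains a small neural network to output a default probability. The borrower features, network size, and the choice of scikit-learn are assumptions for illustration and do not reflect the authors' system.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy borrower features: [age, monthly income, loan amount, past defaults]; label 1 = default.
X = np.array([[25, 1200,  500, 0],
              [40, 3000, 2000, 1],
              [33,  800, 1500, 2],
              [51, 4000, 1000, 0]], dtype=float)
y = np.array([0, 0, 1, 0])

scorer = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
scorer.fit(X, y)
print(scorer.predict_proba([[29, 1500, 900, 1]])[:, 1])   # estimated probability of default
```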
5

Gage, Harmon J. Using Upper Layer Weights to Efficiently Construct and Train Feedforward Neural Networks Executing Backpropagation. Fort Belvoir, VA: Defense Technical Information Center, March 2011. http://dx.doi.org/10.21236/ada545618.

6

DeStefano, Zachary Louis. SNNzkSNARK An Efficient Design and Implementation of a Secure Neural Network Verification System Using zkSNARKs. Office of Scientific and Technical Information (OSTI), January 2020. http://dx.doi.org/10.2172/1583147.

7

Miles, Gaines E., Yael Edan, F. Tom Turpin, Avshalom Grinstein, Thomas N. Jordan, Amots Hetzroni, Stephen C. Weller, Marvin M. Schreiber, and Okan K. Ersoy. Expert Sensor for Site Specification Application of Agricultural Chemicals. United States Department of Agriculture, August 1995. http://dx.doi.org/10.32747/1995.7570567.bard.

Abstract:
In this work, multispectral reflectance images are used in conjunction with a neural network classifier for the purpose of detecting and classifying weeds under real field conditions. Multispectral reflectance images that contained different combinations of weeds and crops were taken under actual field conditions. This multispectral reflectance information was used to develop algorithms that could segment the plants from the background as well as classify them into weeds or crops. In order to segment the plants from the background, the multispectral reflectance of plants and background was studied and a relationship was derived. It was found that using a ratio of two wavelength reflectance images (750 nm and 670 nm) it was possible to segment the plants from the background. Once this was accomplished, it was then possible to classify the segmented images into weed or crop by use of the neural network. The neural network developed for this work is a modification of the standard learning vector quantization algorithm. This neural network was modified by replacing the time-varying adaptation gain with a constant adaptation gain and a binary reinforcement function. This improved accuracy and training time and introduced several new properties such as hill climbing and momentum addition. The network was trained and tested with different wavelength combinations in order to find the best results. Finally, the results of the classifier were evaluated using a pixel-based method and a block-based method. In the pixel-based method, every single pixel is evaluated to test whether it was classified correctly or not; the best weed classification result was 81%, with an associated crop classification accuracy of 57%. In the block-based classification method, the image was divided into blocks and each block was evaluated to determine whether it contained weeds or not. Different block sizes and thresholds were tested. The best results for this method were 97% for a block size of 8 inches and a pixel threshold of 60. A simulation model was developed to 1) quantify the effectiveness of a site-specific sprayer and 2) evaluate the influence of different design parameters on the efficiency of the site-specific sprayer. In each iteration of this model, infected areas (weed patches) in the field were randomly generated and the amount of herbicide required for spraying these areas was calculated. The effectiveness of the sprayer was estimated for different stain sizes, nozzle types (conic and flat), nozzle sizes and stain detection levels of the identification system. Simulation results indicated that the flat nozzle is much more effective compared to the conic nozzle and its relative efficiency is greater for small nozzle sizes. By using a site-specific sprayer, the average ratio between the sprayed areas and the stain areas is about 1.1 to 1.8, which can save up to 92% of herbicides, especially when the proportion of the stain areas is small.
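The segmentation step described above, thresholding the ratio of the 750 nm and 670 nm reflectance images to separate plants from background, is straightforward to express directly; the threshold value in the sketch below is an assumption for illustration, not the one used in the cited work.

```python
import numpy as np

def segment_plants(r750, r670, threshold=1.5, eps=1e-6):
    """Segment vegetation using the 750 nm / 670 nm reflectance ratio.

    Healthy vegetation reflects strongly near 750 nm and absorbs strongly
    near 670 nm, so the ratio is high over plants and low over background.
    """
    ratio = r750 / (r670 + eps)          # eps avoids division by zero
    return ratio > threshold             # boolean mask: True where plants are detected

r750 = np.random.rand(64, 64)            # stand-ins for the two reflectance images
r670 = np.random.rand(64, 64)
mask = segment_plants(r750, r670)
print(mask.mean())                       # fraction of pixels labeled as plant
```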
8

Venedicto, Melissa, and Cheng-Yu Lai. Facilitated Release of Doxorubicin from Biodegradable Mesoporous Silica Nanoparticles. Florida International University, October 2021. http://dx.doi.org/10.25148/mmeurs.009774.

Abstract:
Cervical cancer is one of the most common causes of cancer death for women in the United States. The current treatment with chemotherapy drugs has significant side effects and may cause harm to healthy cells rather than cancer cells. In order to combat the potential side effects, nanoparticles composed of mesoporous silica were created to house the chemotherapy drug doxorubicin (DOX). The silica network contains the drug, and a pH study was conducted to determine the conditions for the nanoparticle to disperse the drug. The introduction of disulfide bonds within the nanoparticle created a framework to efficiently release 97% of DOX in acidic environments and 40% in neutral environments. The distinction between acidic and neutral environments was important, as cancer cells are typically acidic. The chemistry was confirmed by incubating the loaded nanoparticles with HeLa cells for a cytotoxicity report and confocal imaging. The use of the framework for the anticancer drug was shown to be effective for killing cancerous cells.
9

Downard, Alicia, Stephen Semmens, and Bryant Robbins. Automated characterization of ridge-swale patterns along the Mississippi River. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40439.

Abstract:
The orientation of constructed levee embankments relative to alluvial swales is a useful measure for identifying regions susceptible to backward erosion piping (BEP). This research was conducted to create an automated, efficient process to classify patterns and orientations of swales within the Lower Mississippi Valley (LMV) to support levee risk assessments. Two machine learning algorithms are used to train the classification models: a convolutional neural network and a U-net. The resulting workflow can identify linear topographic features but is unable to reliably differentiate swales from other features, such as the levee structure and riverbanks. Further tuning of training data or manual identification of regions of interest could yield significantly better results. The workflow also provides an orientation to each linear feature to support subsequent analyses of position relative to levee alignments. While the individual models fall short of immediate applicability, the procedure provides a feasible, automated scheme to assist in swale classification and characterization within mature alluvial valley systems similar to LMV.