Journal articles on the topic 'Efficient Neural Networks'



Consult the top 50 journal articles for your research on the topic 'Efficient Neural Networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Guo, Qingbei, Xiao-Jun Wu, Josef Kittler, and Zhiquan Feng. "Differentiable neural architecture learning for efficient neural networks." Pattern Recognition 126 (June 2022): 108448. http://dx.doi.org/10.1016/j.patcog.2021.108448.

2

Liu, Shiya, Dong Sam Ha, Fangyang Shen, and Yang Yi. "Efficient neural networks for edge devices." Computers & Electrical Engineering 92 (June 2021): 107121. http://dx.doi.org/10.1016/j.compeleceng.2021.107121.

3

Li, Guoqing, Meng Zhang, Jiaojie Li, Feng Lv, and Guodong Tong. "Efficient densely connected convolutional neural networks." Pattern Recognition 109 (January 2021): 107610. http://dx.doi.org/10.1016/j.patcog.2020.107610.

4

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. "Efficient Processing of Deep Neural Networks." Synthesis Lectures on Computer Architecture 15, no. 2 (June 16, 2020): 1–341. http://dx.doi.org/10.2200/s01004ed1v01y202004cac050.

5

Zelenina, Larisa I., D. S. Khripunov, Liudmila E. Khaimina, Evgenii S. Khaimin, and Inga M. Zashikhina. "The Problem of Images’ Classification: Neural Networks." Mathematics and Informatics LXIV, no. 3 (June 30, 2021): 289–300. http://dx.doi.org/10.53656/math2021-3-4-the.

Abstract:
The article discusses the use of an artificial neural network for the problem of image classification. Based on the compiled dataset, a convolutional neural network model was implemented and trained, and an application for image classification was created. The practical value of the developed application lies in the efficient use of a smartphone camera’s storage. The article presents a detailed methodology of the application’s development.
6

Fahim, Houda, Olivier Sawadogo, Nour Alaa, and Mohammed Guedda. "AN EFFICIENT IDENTIFICATION OF RED BLOOD CELL EQUILIBRIUM SHAPE USING NEURAL NETWORKS." Eurasian Journal of Mathematical and Computer Applications 9, no. 2 (June 2021): 39–56. http://dx.doi.org/10.32523/2306-6172-2021-9-2-39-56.

Abstract:
This work of applied mathematics with interfaces in biophysics focuses on the shape identification and numerical modelling of a single red blood cell. The purpose of this work is to provide a quantitative method for interpreting experimental observations of the red blood cell shape under microscopy. In this paper we give a new formulation based on the classical theory of geometric shape minimization, which assumes that the curvature energy with additional constraints controls the shape of the red blood cell. To minimize this energy under volume and area constraints, we propose a new hybrid algorithm which combines Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA) and the Neural Network Algorithm (NNA). The results obtained using this new algorithm agree well with the experimental results given by Evans et al. (8), especially for sphered and biconcave shapes.
7

Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network." ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3491255.

Abstract:
Tensor networks, as an effective computing framework for efficient processing and analysis of high-dimensional data, have been successfully applied in many fields. However, the performance of traditional tensor networks still cannot match the strong fitting ability of neural networks, so some data processing algorithms based on tensor networks cannot achieve the same excellent performance as deep learning models. To further improve the learning ability of tensor networks, we propose a quantized tensor neural network (QTNN) in this article, which integrates the advantages of neural networks and tensor networks, namely, the powerful learning ability of neural networks and the simplicity of tensor networks. The QTNN model can further be regarded as a generalized multilayer nonlinear tensor network, which can efficiently extract low-dimensional features of the data while maintaining the original structure information. In addition, to represent the local information of the data more effectively, we introduce multiple convolution layers in QTNN to extract local features. We also develop a high-order back-propagation algorithm for training the parameters of QTNN. We conducted classification experiments on multiple representative datasets to further evaluate the performance of the proposed models, and the experimental results show that QTNN is simpler and more efficient when compared to classic deep learning models.
8

Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaptation to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front, and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn a mapping from neural networks to symbolic representations.
9

Zhang, Huan, Pengchuan Zhang, and Cho-Jui Hsieh. "RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5757–64. http://dx.doi.org/10.1609/aaai.v33i01.33015757.

Abstract:
The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants and robustness to adversarial attacks. In this paper, we propose a recursive algorithm, RecurJac, to compute both upper and lower bounds for each element in the Jacobian matrix of a neural network with respect to the network’s input, and the network can contain a wide range of activation functions. As a byproduct, we can efficiently obtain a (local) Lipschitz constant, which plays a crucial role in neural network robustness verification, as well as in the training stability of GANs. Experiments show that (local) Lipschitz constants produced by our method are of better quality than those of previous approaches, thus providing better robustness verification results. Our algorithm has polynomial time complexity, and its computation time is reasonable even for relatively large networks. Additionally, we use our bounds on the Jacobian matrix to characterize the landscape of the neural network, for example, to determine whether there exist stationary points in a local neighborhood.
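As a point of reference for the quantity being bounded, the sketch below computes the exact Jacobian of a small two-layer ReLU network at a point and derives a naive, uncertified local Lipschitz estimate by sampling; this is only a hypothetical illustration of the problem setting, not the RecurJac recursion, which replaces such sampling with certified element-wise bounds.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)   # hidden layer
    W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)   # output layer

    def jacobian(x):
        # For a ReLU network, the Jacobian at x is W2 @ diag(active) @ W1.
        active = (W1 @ x + b1 > 0.0).astype(float)
        return W2 @ (active[:, None] * W1)

    x0 = rng.standard_normal(4)
    print("spectral norm of Jacobian at x0:", np.linalg.norm(jacobian(x0), 2))

    # Naive local Lipschitz estimate: max Jacobian norm over random samples in a
    # small ball around x0; RecurJac instead bounds every Jacobian entry with
    # certified upper and lower bounds.
    samples = x0 + 0.1 * rng.standard_normal((1000, 4))
    print("sampled local Lipschitz estimate:",
          max(np.linalg.norm(jacobian(x), 2) for x in samples))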
10

Zhang, Wenqiang, Jiemin Fang, Xinggang Wang, and Wenyu Liu. "EfficientPose: Efficient human pose estimation with neural architecture search." Computational Visual Media 7, no. 3 (April 7, 2021): 335–47. http://dx.doi.org/10.1007/s41095-021-0214-z.

Abstract:
Human pose estimation from image and video is a key task in many multimedia applications. Previous methods achieve great performance but rarely take efficiency into consideration, which makes it difficult to implement the networks on lightweight devices. Nowadays, real-time multimedia applications call for more efficient models for better interaction. Moreover, most deep neural networks for pose estimation directly reuse networks designed for image classification as the backbone, which are not optimized for the pose estimation task. In this paper, we propose an efficient framework for human pose estimation with two parts, an efficient backbone and an efficient head. By implementing a differentiable neural architecture search method, we customize the backbone network design for pose estimation, and reduce computational cost with negligible accuracy degradation. For the efficient head, we slim the transposed convolutions and propose a spatial information correction module to promote the performance of the final prediction. In experiments, we evaluate our networks on the MPII and COCO datasets. Our smallest model requires only 0.65 GFLOPs with 88.1% PCKh@0.5 on MPII and our large model needs only 2 GFLOPs while its accuracy is competitive with the state-of-the-art large model, HRNet, which takes 9.5 GFLOPs.
11

Guo, Qingbei, Xiao-Jun Wu, Josef Kittler, and Zhiquan Feng. "Weak sub-network pruning for strong and efficient neural networks." Neural Networks 144 (December 2021): 614–26. http://dx.doi.org/10.1016/j.neunet.2021.09.015.

12

Hosny, Khalid M., Marwa M. Khashaba, Walid I. Khedr, and Fathy A. Amer. "An Efficient Neural Network-Based Prediction Scheme for Heterogeneous Networks." International Journal of Sociotechnology and Knowledge Development 12, no. 2 (April 2020): 63–76. http://dx.doi.org/10.4018/ijskd.2020040104.

Abstract:
In mobile wireless networks, providing full mobility without degrading the quality of service (QoS) is becoming an essential challenge. This challenge can be overcome using handover prediction, i.e., the process of determining the next station to which a mobile user will transfer its data connection. A new prediction scheme is presented in this article based on scanning the signal quality between the mobile user and all neighbouring stations in the surrounding area. Additionally, the efficiency of the proposed scheme is enhanced by minimizing the number of redundant (unnecessary) handovers. Both WLAN and Long Term Evolution (LTE) networks are used in the proposed scheme, which is evaluated using various scenarios with several numbers and locations of mobile users and with different numbers and locations of WLAN access points and LTE base stations, all chosen randomly. The proposed prediction scheme achieves a success rate of up to 99% in several scenarios consistent with the LTE-WLAN architecture. To understand the network characteristics, enhance efficiency and increase the handover success percentage, especially at high mobile station speeds, a neural network model is used. Using the trained network, the scheme can predict the next target station for heterogeneous network handover points. The proposed neural network-based scheme adds a significant improvement in accuracy compared to existing schemes that use only the received signal strength (RSS) as a parameter for predicting the next station, achieving an improvement in the success ratio of up to 5% compared with using RSS alone.
13

Wang, Guanzheng, Rubin Wang, Wanzeng Kong, and Jianhai Zhang. "The Relationship between Sparseness and Energy Consumption of Neural Networks." Neural Plasticity 2020 (November 25, 2020): 1–13. http://dx.doi.org/10.1155/2020/8848901.

Abstract:
About 50-80% of the total energy in neural networks is consumed by signaling. A neural network consumes much energy if there are many active neurons in the network; if there are few active neurons, the network consumes very little energy. The ratio of active neurons to all neurons of a neural network, that is, the sparseness, therefore affects the energy consumption of a neural network. Laughlin’s studies show that the sparseness of an energy-efficient code depends on the balance between signaling and fixed costs, but Laughlin gave neither an exact ratio of signaling to fixed costs nor the ratio of active neurons to all neurons in the most energy-efficient neural networks. In this paper, we calculated the ratio of signaling costs to fixed costs from physiological experimental data; it lies between 1.3 and 2.1. We also calculated the ratio of active neurons to all neurons in the most energy-efficient neural networks; it lies between 0.3 and 0.4. Our results are consistent with data from many relevant physiological experiments, indicating that the model used in this paper may reflect neural coding under real conditions. The calculation results of this paper may be helpful for the study of neural coding.
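As a rough numerical illustration of the trade-off summarized above, the sketch below uses a textbook Levy–Baxter-style efficiency criterion (information per unit energy), which is an assumption for illustration and not necessarily the exact model of the paper, to find the activity ratio that is most energy-efficient when signaling costs are 1.3-2.1 times the fixed cost.

    import numpy as np

    p = np.linspace(0.001, 0.999, 999)                       # ratio of active neurons
    entropy = -p * np.log2(p) - (1 - p) * np.log2(1 - p)     # bits per neuron

    for ratio in (1.3, 1.7, 2.1):                            # signaling cost / fixed cost
        energy = 1.0 + ratio * p                             # fixed cost normalized to 1
        efficiency = entropy / energy                        # bits per unit energy
        best = p[np.argmax(efficiency)]
        print(f"signaling/fixed = {ratio}: most efficient activity ratio ~ {best:.2f}")

    # Prints values of roughly 0.32-0.36, consistent with the 0.3-0.4 range above.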
14

Zeng, Zhi, Pengpeng Shi, Fulei Ma, and Peihan Qi. "Parallel Frequency Function-Deep Neural Network for Efficient Approximation of Complex Broadband Signals." Sensors 22, no. 19 (September 28, 2022): 7347. http://dx.doi.org/10.3390/s22197347.

Abstract:
In recent years, with the growing popularity of complex signal approximation via deep neural networks, people have begun to pay close attention to the spectral bias of neural networks—a problem that occurs when a neural network is used to fit broadband signals. An important direction taken to overcome this problem is the use of frequency-selection-based fitting techniques, of which the representative work is the PhaseDNN method, whose core idea is the use of bandpass filters to extract frequency bands with high energy concentration and fit them with different neural networks. Despite the method’s high accuracy, we found in a large number of experiments that it is less efficient for fitting broadband signals with smooth spectra. In order to substantially improve its efficiency, a novel candidate—the parallel frequency function-deep neural network (PFF-DNN)—is proposed by utilizing frequency-domain analysis of broadband signals and the spectral bias nature of neural networks. A substantial improvement in efficiency was observed in extensive numerical experiments. Thus, the PFF-DNN method is expected to become an alternative solution for broadband signal fitting.
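The frequency-selection idea that both PhaseDNN and the proposed PFF-DNN build on can be illustrated with a minimal band-splitting step. The sketch below only shows this pre-processing, with hypothetical band edges and a toy signal; the per-band networks and the PFF-DNN parallelization itself are not reproduced here.

    import numpy as np

    fs = 1000                                    # sampling rate in Hz (assumed)
    t = np.arange(0, 1, 1 / fs)
    signal = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)

    def bandpass(x, lo, hi, fs):
        # Ideal band-pass filter via FFT masking: keep frequencies in [lo, hi).
        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1 / fs)
        return np.fft.irfft(spectrum * ((freqs >= lo) & (freqs < hi)), n=len(x))

    # Split the broadband target into bands; each band would then be fitted by its
    # own small network, which is what sidesteps the spectral bias of a single net.
    bands = [(0, 50), (50, 250), (250, 501)]
    components = [bandpass(signal, lo, hi, fs) for lo, hi in bands]
    print("reconstruction error:", np.max(np.abs(sum(components) - signal)))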
15

Noskova, E. S., I. E. Zakharov, Y. N. Shkandybin, and S. G. Rykovanov. "Towards energy-efficient neural network calculations." Computer Optics 46, no. 1 (February 2022): 160–66. http://dx.doi.org/10.18287/2412-6179-co-914.

Abstract:
Nowadays, the problem of creating high-performance and energy-efficient hardware for Artificial Intelligence tasks is very acute. The most popular solution to this problem is the use of Deep Learning Accelerators, such as GPUs and Tensor Processing Units, to run neural networks. Recently, NVIDIA announced the NVDLA project, which allows one to design neural network accelerators based on open-source code. This work describes a full cycle of creating a prototype NVDLA accelerator, as well as testing the resulting solution by running the ResNet-50 neural network on it. Finally, an assessment of the performance and power efficiency of the prototype NVDLA accelerator compared to a GPU and a CPU is provided; the results show the superiority of NVDLA in many characteristics.
16

Oliver Muncharaz, J. "Hybrid fuzzy neural network versus backpropagation neural network: An application to predict the Ibex-35 index stock." Finance, Markets and Valuation 6, no. 1 (2020): 85–98. http://dx.doi.org/10.46503/alep9985.

Abstract:
The use of neural networks has spread to all areas of knowledge due to the good results obtained in solving the various problems posed. The prediction of prices in general, and stock market prices in particular, represents one of the main objectives of the use of neural networks in finance. This paper presents an analysis of the efficiency of a hybrid fuzzy neural network against a backpropagation-type neural network in predicting the Spanish stock exchange index (IBEX-35). The paper is divided into two parts. In the first part, the main characteristics of neural networks such as the hybrid fuzzy and backpropagation networks, their structures and learning rules are presented. In the second part, the prediction of the IBEX-35 stock exchange index with these networks is analyzed, measuring the efficiency of both as a function of the prediction errors committed. For this purpose, both networks have been constructed with the same inputs and for the same sample period. The results obtained suggest that the hybrid fuzzy neural network is much more efficient than the widespread backpropagation neural network for the sample analysed.
17

Karayiannis, N. B., and A. N. Venetsanopoulos. "Efficient learning algorithms for neural networks (ELEANNE)." IEEE Transactions on Systems, Man, and Cybernetics 23, no. 5 (1993): 1372–83. http://dx.doi.org/10.1109/21.260668.

18

Shell, John, and William D. Gregory. "Efficient Cancer Detection Using Multiple Neural Networks." IEEE Journal of Translational Engineering in Health and Medicine 5 (2017): 1–7. http://dx.doi.org/10.1109/jtehm.2017.2757471.

19

Wang, Wei, and Liqiang Zhu. "Channel Pruning for Efficient Convolution Neural Networks." Journal of Physics: Conference Series 1302 (August 2019): 022073. http://dx.doi.org/10.1088/1742-6596/1302/2/022073.

20

Gottapu, Ram Deepak, and Cihan H. Dagli. "Efficient Architecture Search for Deep Neural Networks." Procedia Computer Science 168 (2020): 19–25. http://dx.doi.org/10.1016/j.procs.2020.02.246.

21

Jiang, Suoliang, Deren Han, and Xiaoming Yuan. "Efficient neural networks for solving variational inequalities." Neurocomputing 86 (June 2012): 97–106. http://dx.doi.org/10.1016/j.neucom.2012.01.020.

22

Sossa, Humberto, and Elizabeth Guevara. "Efficient training for dendrite morphological neural networks." Neurocomputing 131 (May 2014): 132–42. http://dx.doi.org/10.1016/j.neucom.2013.10.031.

23

Bodnár, Péter, Tamás Grósz, László Tóth, and László G. Nyúl. "Efficient visual code localization with neural networks." Pattern Analysis and Applications 21, no. 1 (April 17, 2017): 249–60. http://dx.doi.org/10.1007/s10044-017-0619-6.

24

Zhu, Zhiying, Hang Zhou, Siyuan Xing, Zhenxing Qian, Sheng Li, and Xinpeng Zhang. "Perceptual Hash of Neural Networks." Symmetry 14, no. 4 (April 13, 2022): 810. http://dx.doi.org/10.3390/sym14040810.

Abstract:
In recent years, advances in deep learning have boosted the practical development, distribution and implementation of deep neural networks (DNNs). The concept of symmetry is often adopted in a deep neural network to construct an efficient network structure tailored for a specific task, such as the classic encoder-decoder structure. DNN models are massive in number and diverse in category, quantity and the open-source frameworks used for implementation. Therefore, the retrieval of DNN models has become a problem worthy of attention. To this end, we propose a new idea of generating perceptual hashes of DNN models, named HNN-Net (Hash Neural Network), to index similar DNN models by similar hash codes. The proposed HNN-Net is based on neural graph networks and consists of two stages: the graph generator and the graph hashing. In the graph generator stage, the target DNN model is first converted and optimized into a graph, which is then assigned additional information extracted from the execution of the original model. In the graph hashing stage, it learns to construct a compact binary hash code. The constructed hash function preserves well the features of both the topology structure and the semantic information of a neural network model. Experimental results demonstrate that the proposed scheme is effective for representing a neural network with a short hash code, and that it is generalizable and efficient across different models.
25

Liu, Shiwei, Iftitahu Ni’mah, Vlado Menkovski, Decebal Constantin Mocanu, and Mykola Pechenizkiy. "Efficient and effective training of sparse recurrent neural networks." Neural Computing and Applications 33, no. 15 (January 26, 2021): 9625–36. http://dx.doi.org/10.1007/s00521-021-05727-y.

Abstract:
Recurrent neural networks (RNNs) have achieved state-of-the-art performance on various applications. However, RNNs are prone to be memory-bandwidth limited in practical applications and require both long training periods and long inference times. These problems are at odds with training and deploying RNNs on resource-limited devices where the memory and floating-point operations (FLOPs) budget are strictly constrained. To address this problem, conventional model compression techniques usually focus on reducing inference costs, operating on a costly pre-trained model. Recently, dynamic sparse training has been proposed to accelerate the training process by directly training sparse neural networks from scratch. However, previous sparse training techniques are mainly designed for convolutional neural networks and multi-layer perceptrons. In this paper, we introduce a method to train intrinsically sparse RNN models with a fixed number of parameters and floating-point operations (FLOPs) during training. We demonstrate state-of-the-art sparse performance with long short-term memory and recurrent highway networks on widely used tasks, language modeling and text classification. We use these results to argue that, contrary to the general belief that training a sparse neural network from scratch leads to worse performance than dense networks, sparse training with adaptive connectivity can usually achieve better performance than dense models for RNNs.
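A minimal sketch of the prune-and-regrow step at the heart of dynamic sparse training is given below. It uses generic magnitude pruning with random regrowth on a single weight matrix, in the spirit of sparse-from-scratch methods; the exact redistribution rule the paper applies to recurrent weights may differ.

    import numpy as np

    rng = np.random.default_rng(1)

    def prune_and_regrow(weights, mask, drop_fraction=0.3):
        # One dynamic-sparse-training step; the number of nonzeros stays fixed.
        active = np.flatnonzero(mask)
        n_drop = int(drop_fraction * active.size)
        # 1) Prune the smallest-magnitude active weights.
        drop = active[np.argsort(np.abs(weights.ravel()[active]))[:n_drop]]
        mask.ravel()[drop] = False
        weights.ravel()[drop] = 0.0
        # 2) Regrow the same number of connections at random inactive positions.
        grow = rng.choice(np.flatnonzero(~mask), size=n_drop, replace=False)
        mask.ravel()[grow] = True
        weights.ravel()[grow] = 0.01 * rng.standard_normal(n_drop)
        return weights, mask

    W = rng.standard_normal((128, 128))
    M = rng.random((128, 128)) < 0.1      # ~10% density, kept constant throughout
    W *= M
    W, M = prune_and_regrow(W, M)
    print("density after the step:", M.mean())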
26

Veselý, A. "Economic classification and regression problems and neural networks." Agricultural Economics (Zemědělská ekonomika) 57, No. 3 (March 29, 2011): 150–57. http://dx.doi.org/10.17221/50/2010-agricecon.

Abstract:
Artificial neural networks provide powerful models for solving many economic classification as well as regression problems. For example, they have been successfully used for discriminating between healthy economic agents and those prone to bankruptcy, for inflation-deflation forecasting, for currency exchange rate prediction, and for the prediction of share prices. At present, neural models are part of the majority of standard statistical software packages. This paper discusses the basic principles on which neural network models are based and sums up the important principles that must be respected so that their use in practice is efficient.
27

Zhang, Xin, Yueqiu Jiang, Hongwei Gao, Wei Yang, Zhihong Liang, and Bo Liu. "Power-Efficient Trainable Neural Networks towards Accurate Measurement of Irregular Cavity Volume." Electronics 11, no. 13 (July 1, 2022): 2073. http://dx.doi.org/10.3390/electronics11132073.

Abstract:
Irregular cavity volume measurement is a critical step in industrial production, and the technology is used in a wide variety of applications. Traditional approaches, such as waterflooding-based methods, suffer from significant measurement error, low efficiency, complicated operation, and corrosion of devices. Recently, neural networks based on the air compression principle have been proposed for irregular cavity volume measurement. However, the balance between data quality, network computation speed, convergence, and measurement accuracy is still underexplored. In this paper, we propose novel neural networks to achieve accurate measurement of irregular cavity volume. First, we propose a measurement method based on the air compression principle and comprehensively analyze seven key parameters. Moreover, we integrate the Hilbert-Schmidt independence criterion (HSIC) into fully connected neural networks (FCNNs) to build a trainable framework. This enables the proposed method to achieve power-efficient training. We evaluate the proposed neural network in the real world and compare it with typical procedures. The results show that the proposed method achieves the top performance for measurement accuracy and efficiency.
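Because the abstract leans on the Hilbert-Schmidt independence criterion, a standard empirical HSIC estimator is sketched below with Gaussian kernels. This only illustrates the criterion itself; how the paper wires HSIC into the fully connected network's training is not shown.

    import numpy as np

    def gaussian_kernel(x, sigma=1.0):
        sq = np.sum(x ** 2, axis=1, keepdims=True)
        return np.exp(-(sq + sq.T - 2 * x @ x.T) / (2 * sigma ** 2))

    def hsic(x, y, sigma=1.0):
        # Biased empirical estimator: trace(K H L H) / (n - 1)^2.
        n = x.shape[0]
        h = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
        k, l = gaussian_kernel(x, sigma), gaussian_kernel(y, sigma)
        return np.trace(k @ h @ l @ h) / (n - 1) ** 2

    rng = np.random.default_rng(2)
    a = rng.standard_normal((200, 1))
    b_dep = np.sin(3 * a) + 0.1 * rng.standard_normal((200, 1))
    print("dependent pair:  ", hsic(a, b_dep))               # clearly larger
    print("independent pair:", hsic(a, rng.standard_normal((200, 1))))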
28

Yan, Yilin, Min Chen, Saad Sadiq, and Mei-Ling Shyu. "Efficient Imbalanced Multimedia Concept Retrieval by Deep Learning on Spark Clusters." International Journal of Multimedia Data Engineering and Management 8, no. 1 (January 2017): 1–20. http://dx.doi.org/10.4018/ijmdem.2017010101.

Abstract:
The classification of imbalanced datasets has recently attracted significant attention due to its implications in several real-world use cases. Classifiers developed on datasets with skewed distributions tend to favor the majority classes and are biased against the minority class. Despite extensive research interest, imbalanced data classification remains a challenge in data mining research, especially for multimedia data. Our attempt to overcome this hurdle is to develop a convolutional neural network (CNN) based deep learning solution integrated with a bootstrapping technique. Considering that convolutional neural networks are very computationally expensive, especially when coupled with big training datasets, we propose to extract features from pre-trained convolutional neural network models and feed those features to another fully connected neural network. A Spark implementation shows the promising performance of our model in handling big datasets with respect to feasibility and scalability.
29

Pomerleau, Dean A. "Efficient Training of Artificial Neural Networks for Autonomous Navigation." Neural Computation 3, no. 1 (February 1991): 88–97. http://dx.doi.org/10.1162/neco.1991.3.1.88.

Abstract:
The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a backpropagation network designed to drive the CMU Navlab, a modified Chevy van. This paper describes the training techniques that allow ALVINN to learn in under 5 minutes to autonomously control the Navlab by watching the reactions of a human driver. Using these techniques, ALVINN has been trained to drive in a variety of circumstances including single-lane paved and unpaved roads, and multilane lined and unlined roads, at speeds of up to 20 miles per hour.
30

Chai, Enhui, Wei Yu, Tianxiang Cui, Jianfeng Ren, and Shusheng Ding. "An Efficient Asymmetric Nonlinear Activation Function for Deep Neural Networks." Symmetry 14, no. 5 (May 17, 2022): 1027. http://dx.doi.org/10.3390/sym14051027.

Abstract:
As a key step to endow the neural network with nonlinear factors, the activation function is crucial to the performance of the network. This paper proposes an Efficient Asymmetric Nonlinear Activation Function (EANAF) for deep neural networks. Compared with existing activation functions, the proposed EANAF requires less computational effort, and it is self-regularized, asymmetric and non-monotonic. These desired characteristics facilitate the outstanding performance of the proposed EANAF. To demonstrate the effectiveness of this function in the field of object detection, the proposed activation function is compared with several state-of-the-art activation functions on the typical backbone networks such as ResNet and DSPDarkNet. The experimental results demonstrate the superior performance of the proposed EANAF.
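The abstract does not state the EANAF formula itself, so the sketch below instead evaluates a well-known activation with the same qualitative properties named above (self-gated, asymmetric, non-monotonic): the SiLU/Swish function x·sigmoid(x). It is shown purely to illustrate this class of activation functions, not as the proposed EANAF.

    import numpy as np

    def silu(x):
        # x * sigmoid(x): smooth, asymmetric and non-monotonic, unlike ReLU.
        return x / (1.0 + np.exp(-x))

    x = np.linspace(-6, 6, 13)
    for xi, yi in zip(x, silu(x)):
        print(f"{xi:5.1f} -> {yi:7.4f}")
    # The function dips below zero with a minimum near x = -1.28, then grows
    # roughly linearly for large positive x.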
31

Oh, Seokjin, Jiyong An, and Kyeong-Sik Min. "Area-Efficient Mapping of Convolutional Neural Networks to Memristor Crossbars Using Sub-Image Partitioning." Micromachines 14, no. 2 (January 25, 2023): 309. http://dx.doi.org/10.3390/mi14020309.

Abstract:
Memristor crossbars can be very useful for realizing edge-intelligence hardware, because the neural networks implemented by memristor crossbars can save significantly more computing energy and layout area than conventional CMOS (complementary metal-oxide-semiconductor) digital circuits. One of the important operations used in neural networks is convolution. To perform convolution with memristor crossbars, the full image should be partitioned into several sub-images. By doing so, each sub-image convolution can be mapped to small unit crossbars, whose size should be limited to 128 × 128 or 256 × 256 to avoid the line resistance problem caused by large crossbars. In this paper, various convolution schemes with 3D, 2D, and 1D kernels are analyzed and compared in terms of the neural network’s performance and the overlapping overhead. The neural network simulations indicate that the 2D + 1D kernels can perform the sub-image convolution using a much smaller number of unit crossbars with less rate loss than the 3D kernels. When the CIFAR-10 dataset is tested, the mapping of the sub-image convolution of 2D + 1D kernels to crossbars shows that the number of unit crossbars can be reduced by almost 90% and 95% for 128 × 128 and 256 × 256 crossbars, respectively, compared with the 3D kernels. In contrast, the rate loss of 2D + 1D kernels can be less than 2%. To improve the neural network’s performance further, the 2D + 1D kernels can be combined with 3D kernels in one neural network. When the normalized ratio of 2D + 1D layers is around 0.5, the neural network’s performance shows very little rate loss compared to when the normalized ratio of 2D + 1D layers is zero, while the number of unit crossbars for the normalized ratio = 0.5 can be reduced by half compared with that for the normalized ratio = 0.
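A back-of-the-envelope count of unit crossbars, assuming a plain im2col mapping of a hypothetical convolution layer, is sketched below. It only motivates why tiling into 128 × 128 or 256 × 256 unit crossbars matters; the 2D + 1D kernel decomposition that the paper uses to shrink this count is not reproduced.

    import math

    def unit_crossbars(rows, cols, size):
        # Number of size x size crossbar tiles covering a rows x cols weight matrix.
        return math.ceil(rows / size) * math.ceil(cols / size)

    # Hypothetical layer: 3x3 convolution, 64 input channels, 128 output channels,
    # mapped to a (3*3*64) x 128 weight matrix under a plain im2col scheme.
    rows, cols = 3 * 3 * 64, 128
    for size in (128, 256):
        print(f"{size}x{size} unit crossbars needed: {unit_crossbars(rows, cols, size)}")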
32

He, Ping, Siyuan Ma, and Weidong Li. "Efficient Barrage Video Recommendation Algorithm Based on Convolutional and Recursive Neural Network." 網際網路技術學刊 (Journal of Internet Technology) 22, no. 6 (November 2021): 1241–51. http://dx.doi.org/10.53106/160792642021112206004.

33

Sourek, Gustav, Vojtech Aschenbrenner, Filip Zelezny, Steven Schockaert, and Ondrej Kuzelka. "Lifted Relational Neural Networks: Efficient Learning of Latent Relational Structures." Journal of Artificial Intelligence Research 62 (May 17, 2018): 69–100. http://dx.doi.org/10.1613/jair.1.11203.

Abstract:
We propose a method to combine the interpretability and expressive power of first-order logic with the effectiveness of neural network learning. In particular, we introduce a lifted framework in which first-order rules are used to describe the structure of a given problem setting. These rules are then used as a template for constructing a number of neural networks, one for each training and testing example. As the different networks corresponding to different examples share their weights, these weights can be efficiently learned using stochastic gradient descent. Our framework provides a flexible way of implementing and combining a wide variety of modelling constructs. In particular, the use of first-order logic allows for a declarative specification of latent relational structures, which can then be efficiently discovered in a given data set using neural network learning. Experiments on 78 relational learning benchmarks clearly demonstrate the effectiveness of the framework.
34

Kojic, Nenad, Irini Reljin, and Branimir Reljin. "Neural network for optimization of routing in communication networks." Facta universitatis - series: Electronics and Energetics 19, no. 2 (2006): 317–29. http://dx.doi.org/10.2298/fuee0602317k.

Abstract:
An efficient neural network algorithm for the optimization of routing in communication networks is suggested. As is known from the literature, various optimization and ill-defined problems may be resolved using appropriately designed neural networks, owing to their high computational speed and their ability to work with uncertain data. Under some assumptions, routing in packet-switched communication networks may be considered an optimization problem, more precisely, a shortest-path problem. The Hopfield-type neural network is a very efficient tool for solving such problems. The suggested routing algorithm is designed to find the optimal path, meaning the shortest path (if possible), while taking into account the traffic conditions: the incoming traffic flow, router occupancy, and link capacities, avoiding packet loss due to input buffer overflow. The applicability of the proposed model is demonstrated through computer simulations in different traffic conditions and for different fully connected networks with both symmetrical and non-symmetrical links.
35

LIEBOVITCH, LARRY S., NIKITA D. ARNOLD, and LEV Y. SELECTOR. "NEURAL NETWORKS TO COMPUTE MOLECULAR DYNAMICS." Journal of Biological Systems 02, no. 02 (June 1994): 193–228. http://dx.doi.org/10.1142/s0218339094000155.

Abstract:
Large molecules such as proteins have many of the properties of neural networks. Hence, neural networks may serve as a natural and thus efficient method to compute the time dependent changes of the structure in large molecules. We describe how to encode the spatial conformation and energy structure of a molecule in a neural network. The dynamics of the molecule can then be computed from the dynamics of the corresponding neural network. As a detailed example, we formulated a Hopfield network to compute the molecular dynamics of a small molecule, cyclohexane. We used this network to determine the distribution of times spent in the twist and chair conformational states as the cyclohexane thermally switches between these two states.
36

Inohira, Eiichi, and Hirokazu Yokoi. "An Optimal Design Method for Artificial Neural Networks by Using the Design of Experiments." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 6 (July 20, 2007): 593–99. http://dx.doi.org/10.20965/jaciii.2007.p0593.

Abstract:
This paper presents a method to optimally design artificial neural networks with many design parameters using the Design of Experiments (DOE), whose features are efficient experiments using an orthogonal array and quantitative analysis by analysis of variance. Neural networks can approximate arbitrary nonlinear functions. The accuracy of a trained neural network at a certain number of learning cycles depends on both its weights and biases and its structure and learning rate. Design methods such as trial-and-error, brute-force approaches, network construction, and pruning cannot deal with many design parameters such as the number of elements in a layer and the learning rate. Our design method realizes efficient optimization using DOE, and obtains confidence in the optimal design through statistical analysis even though trained neural networks vary due to randomness in the initial weights. We apply our design method to three-layer and five-layer feedforward neural networks in a preliminary study and show that the approximation accuracy of multilayer neural networks is increased by taking many more parameters into account.
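To make the orthogonal-array idea concrete, the sketch below enumerates the standard L9(3^4) array and uses three of its columns to pick hyperparameter combinations. The factor names and level values are hypothetical; the paper's actual factors and the analysis of variance are not reproduced here.

    # Standard Taguchi L9 orthogonal array: 9 runs, 4 columns, 3 levels per column.
    L9 = [
        (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
        (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
        (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
    ]

    # Hypothetical factor levels for a neural network design study.
    hidden_units  = {1: 16, 2: 64, 3: 256}
    learning_rate = {1: 1e-3, 2: 1e-2, 3: 1e-1}
    epochs        = {1: 50, 2: 100, 3: 200}

    for run, (a, b, c, _) in enumerate(L9, start=1):
        print(f"run {run}: hidden={hidden_units[a]}, lr={learning_rate[b]}, epochs={epochs[c]}")

    # Only 9 of the 27 possible combinations are trained; analysis of variance on
    # the resulting accuracies then ranks the influence of each factor.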
37

Yilmaz, Muhammed, Ahmet Murat Ozbayoglu, and Bulent Tavli. "Efficient computation of wireless sensor network lifetime through deep neural networks." Wireless Networks 27, no. 3 (February 10, 2021): 2055–65. http://dx.doi.org/10.1007/s11276-021-02556-8.

38

Varun, Rajesh Kumar, Rakesh C. Gangwar, Omprakash Kaiwartya, and Geetika Aggarwal. "Energy-Efficient Routing Using Fuzzy Neural Network in Wireless Sensor Networks." Wireless Communications and Mobile Computing 2021 (August 3, 2021): 1–13. http://dx.doi.org/10.1155/2021/5113591.

Abstract:
In wireless sensor networks, energy is a precious resource that should be utilized wisely to improve network lifetime. Uneven distribution of load over sensor devices is also a cause of energy depletion and can lead to interruptions in network operations as well. For the next generation's ubiquitous sensor networks, a single artificial intelligence methodology is not able to resolve the issues of energy and load. Therefore, this paper proposes energy-efficient routing using a fuzzy neural network (ERFN) to minimize energy consumption while fairly equalizing energy consumption among sensors so as to prolong the lifetime of the WSN. The algorithm utilizes fuzzy logic and neural network concepts for the intelligent selection of the cluster head (CH) so that the sensors' energy is consumed evenly. In this work, fuzzy rules, sets, and membership functions are developed to make decisions regarding next-hop selection based on the total residual energy, link quality, and forward progress towards the sink. The developed algorithm ERFN proves its efficiency compared to state-of-the-art algorithms with respect to the number of alive nodes, percentage of dead nodes, average energy decay, and standard deviation of residual energy.
39

Kim, Seungnyun, Junwon Son, and Byonghyo Shim. "Energy-Efficient Ultra-Dense Network Using LSTM-based Deep Neural Networks." IEEE Transactions on Wireless Communications 20, no. 7 (July 2021): 4702–15. http://dx.doi.org/10.1109/twc.2021.3061577.

40

Nabavinejad, Seyed Morteza, Mohammad Baharloo, Kun-Chih Chen, Maurizio Palesi, Tim Kogel, and Masoumeh Ebrahimi. "An Overview of Efficient Interconnection Networks for Deep Neural Network Accelerators." IEEE Journal on Emerging and Selected Topics in Circuits and Systems 10, no. 3 (September 2020): 268–82. http://dx.doi.org/10.1109/jetcas.2020.3022920.

41

Tosh, Colin R., and Luke McNally. "The relative efficiency of modular and non-modular networks of different size." Proceedings of the Royal Society B: Biological Sciences 282, no. 1802 (March 7, 2015): 20142568. http://dx.doi.org/10.1098/rspb.2014.2568.

Abstract:
Most biological networks are modular, but previous work with small model networks has indicated that modularity does not necessarily lead to increased functional efficiency. Most biological networks are large, however, and here we examine the relative functional efficiency of modular and non-modular neural networks over a range of sizes. We conduct a detailed analysis of efficiency in networks of two size classes, ‘small’ and ‘large’, and a less detailed analysis across a range of network sizes. The former analysis reveals that while the modular network is less efficient than one of the two non-modular networks considered when networks are small, it is usually equally or more efficient than both non-modular networks when networks are large. The latter analysis shows that in networks of small to intermediate size, modular networks are much more efficient than non-modular networks of the same (low) connective density. If connective density must be kept low, to reduce energy needs for example, this could promote modularity. We have shown how relative functionality/performance scales with network size, but the precise nature of the evolutionary relationship between network size and the prevalence of modularity will depend on the costs of connectivity.
42

Pavićević, Milutin, and Tomo Popović. "Forecasting Day-Ahead Electricity Metrics with Artificial Neural Networks." Sensors 22, no. 3 (January 28, 2022): 1051. http://dx.doi.org/10.3390/s22031051.

Abstract:
As artificial neural network architectures grow increasingly more efficient in time-series prediction tasks, their use for day-ahead electricity price and demand prediction, a task with very specific rules and highly volatile dataset values, grows more attractive. Without a standardized way to compare the efficiency of algorithms and methods for forecasting electricity metrics, it is hard to have a good sense of the strengths and weaknesses of each approach. In this paper, we create models in several neural network architectures for predicting the electricity price on the HUPX market and electricity load in Montenegro and compare them to multiple neural network models on the same basis (using the same dataset and metrics). The results show the promising efficiency of neural networks in general for the task of short-term prediction in the field, with methods combining fully connected layers and recurrent neural or temporal convolutional layers performing the best. The feature extraction power of convolutional layers shows very promising results and recommends the further exploration of temporal convolutional networks in the field.
43

GHOSH, JOYDEEP, and YOAN SHIN. "EFFICIENT HIGHER-ORDER NEURAL NETWORKS FOR CLASSIFICATION AND FUNCTION APPROXIMATION." International Journal of Neural Systems 03, no. 04 (January 1992): 323–50. http://dx.doi.org/10.1142/s0129065792000255.

Abstract:
This paper introduces a class of higher-order networks called pi-sigma networks (PSNs). PSNs are feedforward networks with a single “hidden” layer of linear summing units and with product units in the output layer. A PSN uses these product units to indirectly incorporate the capabilities of higher-order networks while greatly reducing network complexity. PSNs have only one layer of adjustable weights and exhibit fast learning. A PSN with K summing units provides a constrained Kth order approximation of a continuous function. A generalization of the PSN is presented that can uniformly approximate any continuous function defined on a compact set. The use of linear hidden units makes it possible to mathematically study the convergence properties of various LMS type learning algorithms for PSNs. We show that it is desirable to update only a partial set of weights at a time rather than synchronously updating all the weights. Bounds for learning rates which guarantee convergence are derived. Several simulation results on pattern classification and function approximation problems highlight the capabilities of the PSN. Extensive comparisons are made with other higher order networks and with multilayered perceptrons. The neurobiological plausibility of PSN type networks is also discussed.
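The pi-sigma architecture described above is compact enough to sketch directly: K linear summing units feed a single product unit, and only the summing-unit weights are adjustable. The forward pass below is a minimal illustration of that structure; the LMS-type learning rules and convergence analysis of the paper are not shown.

    import numpy as np

    rng = np.random.default_rng(3)

    def pi_sigma_forward(x, W, b, activation=np.tanh):
        # Output = activation( product over k of (w_k . x + b_k) ).
        sums = W @ x + b            # K linear summing units (the only trained layer)
        return activation(np.prod(sums))

    K, d = 3, 5                     # order-3 approximation of a 5-input function
    W = 0.1 * rng.standard_normal((K, d))
    b = 0.1 * rng.standard_normal(K)
    x = rng.standard_normal(d)
    print("PSN output:", pi_sigma_forward(x, W, b))
    # The product of K first-degree terms yields a constrained K-th order
    # polynomial in the inputs, with only K * (d + 1) adjustable weights.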
44

Yotov, Kostadin, Emil Hadzhikolev, Stanka Hadzhikoleva, and Stoyan Cheresharov. "Finding the Optimal Topology of an Approximating Neural Network." Mathematics 11, no. 1 (January 1, 2023): 217. http://dx.doi.org/10.3390/math11010217.

Abstract:
A large number of researchers spend a lot of time searching for the most efficient neural network to solve a given problem. The procedure of configuration, training, testing, and comparison of expected performance is applied to each experimental neural network. The configuration parameters—training methods, transfer functions, number of hidden layers, number of neurons, number of epochs, and tolerable error—have multiple possible values. Setting guidelines for appropriate parameter values would shorten the time required to create an efficient neural network, assist researchers, and provide a tool to improve the performance of automated neural network search methods. The task considered in this paper is the determination of upper bounds on the number of hidden layers and the number of neurons in them for approximating artificial neural networks trained with algorithms using the Jacobi matrix in the error function. The derived formulas for the upper limits of the number of hidden layers and the number of neurons in them are proved theoretically, and the presented experiments confirm their validity. They show that the search for an efficient neural network can focus below certain upper bounds, above which it becomes pointless. The formulas provide researchers with a useful auxiliary tool in the search for efficient neural networks with optimal topology. They are applicable to neural networks trained with methods such as Levenberg–Marquardt, Gauss–Newton, Bayesian regularization, scaled conjugate gradient, BFGS quasi-Newton, etc., which use the Jacobi matrix.
45

Wu, Chyuan-Tyng, Peter van Beek, Phillip Schmidt, Joao Peralta Moreira, and Thomas R. Gardos. "Evaluation of semi-frozen semi-fixed neural network for efficient computer vision inference." Electronic Imaging 2021, no. 17 (January 18, 2021): 213–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.17.avm-213.

Abstract:
Deep neural networks have been utilized in an increasing number of computer vision tasks, demonstrating superior performance. Much research has been focused on making deep networks more suitable for efficient hardware implementation, for low-power and low-latency real-time applications. In [1], Isikdogan et al. introduced a deep neural network design that provides an effective trade-off between flexibility and hardware efficiency. The proposed solution consists of fixed-topology hardware blocks, with partially frozen/partially trainable weights, that can be configured into a full network. Initial results on a few computer vision tasks were presented in [1]. In this paper, we further evaluate this network design by applying it to several additional computer vision use cases and comparing it to other hardware-friendly networks. The experimental results presented here show that the proposed semi-fixed semi-frozen design achieves competitive performance on a variety of benchmarks, while maintaining very high hardware efficiency.
46

Soulie, Françoise Fogelman. "PRACTICAL PROBLEMS USING NEURAL NETWORKS." International Journal of Neural Systems 03, supp01 (January 1992): 25–30. http://dx.doi.org/10.1142/s0129065792000346.

Abstract:
Neural networks are very efficient for real-world applications. However, practical problems often arise which can hinder performance. We discuss here some of these problems—under-representation of classes, rejection of outliers and ambiguous patterns—and illustrate the issues raised through various applications.
47

Hong, Taeyang, Yongshin Kang, and Jaeyong Chung. "InSight: An FPGA-Based Neuromorphic Computing System for Deep Neural Networks." Journal of Low Power Electronics and Applications 10, no. 4 (October 30, 2020): 36. http://dx.doi.org/10.3390/jlpea10040036.

Abstract:
Deep neural networks have demonstrated impressive results in various cognitive tasks such as object detection and image classification. This paper describes a neuromorphic computing system that is designed from the ground up for energy-efficient evaluation of deep neural networks. The computing system consists of a non-conventional compiler, a neuromorphic hardware architecture, and a space-efficient microarchitecture that leverages existing integrated circuit design methodologies. The compiler takes a trained, feedforward network as input, compresses the weights linearly, and generates a time delay neural network, reducing the number of connections significantly. The connections and units in the simplified network are mapped to silicon synapses and neurons. We demonstrate an implementation of the neuromorphic computing system based on a field-programmable gate array that performs image classification on the handwritten digits (0–9) of the MNIST dataset with 99.37% accuracy while consuming only 93 µJ per image. For image classification on the colour images of the 10-class CIFAR-10 dataset, it achieves 83.43% accuracy at more than 11× higher energy efficiency compared to a recent field-programmable gate array (FPGA)-based accelerator.
48

Wu, Wang, Rigall, Li, Zhu, He, and Yan. "ECNet: Efficient Convolutional Networks for Side Scan Sonar Image Segmentation." Sensors 19, no. 9 (April 29, 2019): 2009. http://dx.doi.org/10.3390/s19092009.

Abstract:
This paper presents a novel and practical convolutional neural network architecture to implement semantic segmentation for side scan sonar (SSS) images. As a widely used sensor for marine survey, SSS provides higher-resolution images of the seafloor and underwater targets. However, due to the large number of background pixels in SSS images, imbalanced classification remains an issue. What is more, SSS images contain undesirable speckle noise and intensity inhomogeneity. We define and detail a network and training strategy that tackle these three important issues for SSS image segmentation. Our proposed method performs image-to-image prediction by leveraging fully convolutional neural networks and deeply-supervised nets. The architecture consists of an encoder network to capture context, a corresponding decoder network to restore full input-size resolution feature maps from low-resolution ones for pixel-wise classification, and a single-stream deep neural network with multiple side-outputs to optimize edge segmentation. We measured the prediction time of our network on our dataset, implemented on an NVIDIA Jetson AGX Xavier, and compared it to other similar semantic segmentation networks. The experimental results show that the presented method for SSS image segmentation brings obvious advantages and is applicable to real-time processing tasks.
49

Chen, Hanlin, Xudong Zhang, Teli Ma, Haosong Yue, Xin Wang, and Baochang Zhang. "Efficient Facial Landmark Localization Based on Binarized Neural Networks." Electronics 9, no. 8 (July 31, 2020): 1236. http://dx.doi.org/10.3390/electronics9081236.

Abstract:
Facial landmark localization is a significant yet challenging computer vision task, whose accuracy has been remarkably improved by the successful application of deep Convolutional Neural Networks (CNNs). However, CNNs require huge storage and computation overhead, thus impeding their deployment on computationally limited platforms. In this paper, to the best of our knowledge, efficient facial landmark localization is implemented via binarized CNNs for the first time. We introduce a new network architecture to calculate the binarized models, referred to as Amplitude Convolutional Networks (ACNs), based on the proposed asynchronous back propagation algorithm. We can efficiently recover the full-precision filters using only a single factor in an end-to-end manner, and the efficiency of CNNs for facial landmark localization is further improved by the extremely compressed 1-bit ACNs. Our ACNs reduce the storage space of convolutional filters by a factor of 32 compared with the full-precision models on the LFW+Webface, CelebA, BioID and 300W datasets, while achieving a comparable performance to full-precision facial landmark localization algorithms.
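The single-factor recovery mentioned above follows the familiar 1-bit weight quantization pattern, sketched below in its generic XNOR-Net-style form: binary weights plus one scaling factor per filter. The amplitude recovery and asynchronous back-propagation of the proposed ACNs are more involved than this illustration.

    import numpy as np

    def binarize_filter(w):
        # Approximate a real-valued filter by alpha * sign(w) with one scalar alpha;
        # alpha = mean(|w|) minimizes the L2 reconstruction error.
        alpha = np.mean(np.abs(w))
        return alpha, np.sign(w)

    rng = np.random.default_rng(4)
    w = rng.standard_normal((3, 3, 16))          # one 3x3 filter over 16 channels
    alpha, w_bin = binarize_filter(w)
    print("scaling factor:", alpha)
    print("relative reconstruction error:",
          np.linalg.norm(w - alpha * w_bin) / np.linalg.norm(w))
    # Storing sign bits plus one float per filter is what yields the ~32x filter
    # compression reported for 1-bit networks.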
50

Ghimire, Deepak, Dayoung Kil, and Seong-heum Kim. "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Electronics 11, no. 6 (March 18, 2022): 945. http://dx.doi.org/10.3390/electronics11060945.

Abstract:
Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction layers that fully utilize a large amount of data. However, they often require substantial computation and memory resources while replacing traditional hand-engineered features in existing systems. In this review, to improve the efficiency of deep learning research, we focus on three aspects: quantized/binarized models, optimized architectures, and resource-constrained systems. Recent advances in light-weight deep learning models and network architecture search (NAS) algorithms are reviewed, starting with simplified layers and efficient convolution and including new architectural design and optimization. In addition, several practical applications of efficient CNNs have been investigated using various types of hardware architectures and platforms.