Journal articles on the topic 'Convolutive Neural Networks'

Consult the top 50 journal articles for your research on the topic 'Convolutive Neural Networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

KIREI, B. S., M. D. TOPA, I. MURESAN, I. HOMANA, and N. TOMA. "Blind Source Separation for Convolutive Mixtures with Neural Networks." Advances in Electrical and Computer Engineering 11, no. 1 (2011): 63–68. http://dx.doi.org/10.4316/aece.2011.01010.

2

Karhunen, J., A. Cichocki, W. Kasprzak, and P. Pajunen. "On Neural Blind Separation with Noise Suppression and Redundancy Reduction." International Journal of Neural Systems 08, no. 02 (April 1997): 219–37. http://dx.doi.org/10.1142/s0129065797000239.

Abstract:
Noise is an unavoidable factor in real sensor signals. We study how additive and convolutive noise can be reduced or even eliminated in the blind source separation (BSS) problem. Particular attention is paid to cases in which the number of sensors is larger than the number of sources. We propose various methods and associated adaptive learning algorithms for such an extended BSS problem. Performance and validity of the proposed approaches are demonstrated by extensive computer simulations.
3

Duan, Yunlong, Ziyu Han, and Zhening Tang. "A lightweight plant disease recognition network based on Resnet." Applied and Computational Engineering 5, no. 1 (June 14, 2023): 583–92. http://dx.doi.org/10.54254/2755-2721/5/20230651.

Abstract:
The identification of foliar diseases is very important in plant cultivation: if diseases go undetected, yields may decline, causing serious losses to the industries involved. Most early automatic recognition methods for plant leaves were based on hand-crafted features and classifiers, and their performance often could not cope with complex real-world application scenarios. Thanks to the rapid development of convolutional neural networks such as ResNet, deep-learning-based plant disease identification has achieved breakthrough accuracy. However, convolutional neural networks tend to have many parameters, a large computational cost, and slow training, which makes them hard to use in small and medium-sized plant cultivation businesses, especially on small edge computing devices deployed in the field. This paper designs a new lightweight ResNet structure, Resnet-9, which reduces the number of layers of the traditional ResNet: compared with other commonly used plant disease recognition methods, it preserves ResNet's accuracy while being more lightweight. The model's parameters occupy only 6.6 MB of memory, and it achieves 99.23% accuracy on public datasets; even on other datasets, the accuracy is still 95.15%. The effectiveness of the method is verified by comparative experiments.
4

Tong, Lian, Lan Yang, Xuan Wang, and Li Liu. "Self-aware face emotion accelerated recognition algorithm: a novel neural network acceleration algorithm of emotion recognition for international students." PeerJ Computer Science 9 (September 26, 2023): e1611. http://dx.doi.org/10.7717/peerj-cs.1611.

Abstract:
With an increasing number of human-computer interaction application scenarios, researchers want computers to recognize human emotions more accurately and efficiently. Such applications are urgently needed at universities, where staff want to understand students' psychology in real time to avert crises. This research proposes a self-aware face emotion accelerated recognition algorithm (SFEARA) that improves the efficiency of convolutional neural networks (CNNs) for facial emotion recognition. During inference, SFEARA identifies critical and non-critical regions of the input data, performs high-precision computation on the critical regions and low-precision convolution on the non-critical ones, and finally combines the results, yielding an emotion recognition model for international students. Experimental comparisons show that SFEARA achieves 1.3x to 1.6x higher computational efficiency and 30% to 40% lower energy consumption than conventional CNNs in emotion recognition applications, making it better suited to real-time scenarios with more background information.
5

Sineglazov, Victor, and Petro Chynnyk. "Quantum Convolution Neural Network." Electronics and Control Systems 2, no. 76 (June 23, 2023): 40–45. http://dx.doi.org/10.18372/1990-5548.76.17667.

Abstract:
In this work, quantum convolutional neural networks are considered for the task of recognizing handwritten digits. Proprietary quantum circuits are proposed for the convolutional layer and for the pooling layer of a quantum convolutional neural network. The training results of the quantum convolutional neural networks are analyzed: the models built are compared, and the best one is selected on the basis of the accuracy, recall, precision, and F1-score metrics. A comparative analysis with a classical convolutional neural network is carried out on the same metrics. The object of the study is the task of digit recognition; the subjects of the research are convolutional and quantum convolutional neural networks. The results of this work can be applied in further research on quantum computing for artificial intelligence tasks.
6

Lü Benyuan, Zhuo Zhenfu, Han Yongsai, and Zhang Lichao. "基于Faster区域卷积神经网络的目标检测" [Object Detection Based on Faster Region-Based Convolutional Neural Network]. Laser & Optoelectronics Progress 58, no. 22 (2021): 2210017. http://dx.doi.org/10.3788/lop202158.2210017.

7

Anmin, Kong, and Zhao Bin. "A Parallel Loading Based Accelerator for Convolution Neural Network." International Journal of Machine Learning and Computing 10, no. 5 (October 5, 2020): 669–74. http://dx.doi.org/10.18178/ijmlc.2020.10.5.989.

8

Sharma, Himanshu, and Rohit Agarwal. "Channel Enhanced Deep Convolution Neural Network based Cancer Classification." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (October 31, 2019): 610–17. http://dx.doi.org/10.5373/jardcs/v11sp10/20192849.

9

Anem, Smt Jayalaxmi, B. Dharani, K. Raveendra, CH Nikhil, and K. Akhil. "Leveraging Convolution Neural Network (CNN) for Skin Cancer Identification." International Journal of Research Publication and Reviews 5, no. 4 (April 2024): 2150–55. http://dx.doi.org/10.55248/gengpi.5.0424.0955.

10

Oh, Seokjin, Jiyong An, and Kyeong-Sik Min. "Area-Efficient Mapping of Convolutional Neural Networks to Memristor Crossbars Using Sub-Image Partitioning." Micromachines 14, no. 2 (January 25, 2023): 309. http://dx.doi.org/10.3390/mi14020309.

Abstract:
Memristor crossbars can be very useful for realizing edge-intelligence hardware, because the neural networks implemented by memristor crossbars can save significantly more computing energy and layout area than the conventional CMOS (complementary metal–oxide–semiconductor) digital circuits. One of the important operations used in neural networks is convolution. For performing the convolution by memristor crossbars, the full image should be partitioned into several sub-images. By doing so, each sub-image convolution can be mapped to small-size unit crossbars, of which the size should be defined as 128 × 128 or 256 × 256 to avoid the line resistance problem caused from large-size crossbars. In this paper, various convolution schemes with 3D, 2D, and 1D kernels are analyzed and compared in terms of neural network’s performance and overlapping overhead. The neural network’s simulation indicates that the 2D + 1D kernels can perform the sub-image convolution using a much smaller number of unit crossbars with less rate loss than the 3D kernels. When the CIFAR-10 dataset is tested, the mapping of sub-image convolution of 2D + 1D kernels to crossbars shows that the number of unit crossbars can be reduced almost by 90% and 95%, respectively, for 128 × 128 and 256 × 256 crossbars, compared with the 3D kernels. On the contrary, the rate loss of 2D + 1D kernels can be less than 2%. To improve the neural network’s performance more, the 2D + 1D kernels can be combined with 3D kernels in one neural network. When the normalized ratio of 2D + 1D layers is around 0.5, the neural network’s performance indicates very little rate loss compared to when the normalized ratio of 2D + 1D layers is zero. However, the number of unit crossbars for the normalized ratio = 0.5 can be reduced by half compared with that for the normalized ratio = 0.
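If the 2D + 1D kernels above are read as a depthwise-then-pointwise factorization of a standard 3D kernel (an assumption on our part; the paper defines its own partitioning scheme), a quick parameter count suggests why far fewer unit crossbars are needed:

```python
def conv3d_params(k, c_in, c_out):
    # standard convolution: one k x k x c_in kernel per output channel
    return k * k * c_in * c_out

def conv2d_plus_1d_params(k, c_in, c_out):
    # depthwise (2D, per-channel k x k) kernels plus pointwise (1D, 1x1) kernels
    return k * k * c_in + c_in * c_out

full = conv3d_params(3, 64, 64)              # 36864 weights to map
factored = conv2d_plus_1d_params(3, 64, 64)  # 576 + 4096 = 4672 weights
print(full, factored, round(factored / full, 3))
```

Under this reading, the factored layer needs roughly an order of magnitude fewer weights, consistent in spirit with the crossbar savings reported in the abstract.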
11

Reddy, M. Venkata Krishna, and Pradeep S. "Envision Foundational of Convolution Neural Network." International Journal of Innovative Technology and Exploring Engineering 10, no. 6 (April 30, 2021): 54–60. http://dx.doi.org/10.35940/ijitee.f8804.0410621.

Abstract:
1. Bilal, A. Jourabloo, M. Ye, X. Liu, and L. Ren. Do Convolutional Neural Networks Learn Class Hierarchy? IEEE Transactions on Visualization and Computer Graphics, 24(1):152–162, Jan. 2018.
2. M. Carney, B. Webster, I. Alvarado, K. Phillips, N. Howell, J. Griffith, J. Jongejan, A. Pitaru, and A. Chen. Teachable Machine: Approachable Web-Based Tool for Exploring Machine Learning Classification. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20. ACM, Honolulu, HI, USA, 2020.
3. A. Karpathy. CS231n Convolutional Neural Networks for Visual Recognition, 2016.
4. M. Kahng, N. Thorat, D. H. Chau, F. B. Viegas, and M. Wattenberg. GANLab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation. IEEE Transactions on Visualization and Computer Graphics, 25(1):310–320, Jan. 2019.
5. J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson. Understanding Neural Networks Through Deep Visualization. In ICML Deep Learning Workshop, 2015.
6. M. Kahng, P. Y. Andrews, A. Kalro, and D. H. Chau. ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models. IEEE Transactions on Visualization and Computer Graphics, 24(1):88–97, Jan. 2018.
7. https://cs231n.github.io/convolutional-networks/
8. https://www.analyticsvidhya.com/blog/2020/02/learn-imageclassification-cnn-convolutional-neural-networks-3-datasets/
9. https://towardsdatascience.com/understanding-cnn-convolutionalneural-network-69fd626ee7d4
10. https://medium.com/@birdortyedi_23820/deep-learning-lab-episode-2-cifar-10-631aea84f11e
11. J. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang, G. Wang, J. Cai, and T. Chen. Recent advances in convolutional neural networks. Pattern Recognition, 77:354–377, May 2018.
12. Hamid, Y., Shah, F. A., and Sugumaram, M. (2014). "Wavelet neural network model for network intrusion detection system", International Journal of Information Technology, Vol. 11, No. 2, pp. 251–263.
13. G. Sreeram, S. Pradeep, K. SrinivasRao, B. Deevan Raju, and Parveen Nikhat. "Moving ridge neuronal espionage network simulation for reticulum invasion sensing". International Journal of Pervasive Computing and Communications. https://doi.org/10.1108/IJPCC-05-2020-0036
14. E. Stevens, L. Antiga, and T. Viehmann. Deep Learning with PyTorch. O'Reilly Media, 2019.
15. J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson. Understanding Neural Networks Through Deep Visualization. In ICML Deep Learning Workshop, 2015.
16. Aman Dureja and Payal Pahwa. "Analysis of Non-Linear Activation Functions for Classification Tasks Using Convolutional Neural Networks", Recent Advances in Computer Science, Vol. 2, Issue 3, 2019, pp. 156–161.
17. https://missinglink.ai/guides/neural-network-concepts/7-types-neuralnetwork-activation-functions-right/
12

Wang Xuanqi, Yang Feng, Cao Bin, Liu Jing, Wei Dejian, and Cao Hui. "卷积神经网络在甲状腺结节诊断中的应用" [Application of Convolutional Neural Networks in the Diagnosis of Thyroid Nodules]. Laser & Optoelectronics Progress 59, no. 8 (2022): 0800002. http://dx.doi.org/10.3788/lop202259.0800002.

13

Wang, Lei. "Application Research of Deep Convolutional Neural Network in Computer Vision." Journal of Networking and Telecommunications 2, no. 2 (August 6, 2020): 23. http://dx.doi.org/10.18282/jnt.v2i2.886.

Abstract:
As an important research achievement in the field of brain-like computing, deep convolutional neural networks have been widely used in computer vision, natural language processing, information retrieval, speech recognition, semantic understanding, and many other fields. They have set off a wave of neural network research in industry and academia and promoted the development of artificial intelligence. At present, deep convolutional neural networks mainly simulate the complex hierarchical cognitive processes of the human brain by increasing the number of network layers, using larger training data sets, and improving existing network structures or training algorithms, so as to narrow the gap with the human visual system and enable machines to acquire the capability of forming "abstract concepts". Deep convolutional neural networks have achieved great success in many computer vision tasks such as image classification, object detection, face recognition, and pedestrian recognition. This paper first reviews the development history of convolutional neural networks and analyzes in detail how a deep convolutional neural network works. It then introduces representative achievements from two aspects, showing through examples how various techniques improve image classification accuracy. On the side of adding network layers, the structures of classical convolutional neural networks such as AlexNet, ZF-Net, VGG, GoogLeNet, and ResNet are discussed and analyzed. On the side of enlarging the data set, the difficulty of manually adding labeled samples and the effect of data augmentation on network performance are introduced. The paper focuses on the latest research progress of convolutional neural networks in image classification and face recognition, and finally discusses the problems and challenges to be solved in future brain-like intelligence research based on deep convolutional neural networks.
14

Haffner, Oto, Erik Kučera, Peter Drahoš, and Ján Cigánek. "Using Entropy for Welds Segmentation and Evaluation." Entropy 21, no. 12 (November 28, 2019): 1168. http://dx.doi.org/10.3390/e21121168.

Abstract:
In this paper, a methodology for evaluating weld quality is developed, based on weld segmentation using entropy and evaluation by conventional and convolutional neural networks. Unlike the conventional networks, the convolutional neural networks in our experiments require no image preprocessing (entropy-based weld segmentation) or special data representation. The experiments are performed on 6422 weld image samples, and the performance of both types of neural network is compared to conventional methods. In all experiments, neural networks implemented and trained using the proposed approach delivered excellent results, with a success rate of nearly 100%. The best results were achieved by the convolutional neural networks, which performed excellently with almost no preprocessing of the image data.
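As a minimal illustration of the entropy cue (a sketch of the general idea, not the authors' pipeline), the Shannon entropy of a patch's intensity histogram separates flat background from textured regions such as weld seams:

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Entropy (in bits) of a pixel-intensity histogram. High values indicate
    textured regions; low values indicate flat background."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = [128] * 64           # constant patch: 0 bits
textured = list(range(64))  # 64 distinct intensities: log2(64) = 6 bits
print(shannon_entropy(flat), shannon_entropy(textured))
```

Thresholding such per-patch entropies is one simple way to mark candidate weld regions before classification.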
15

Yang Guowei, Zhou Nan, Yang Min, Zhang Yongshuai, and Wang Yizhong. "融合卷积神经网络和相关滤波的焊缝自动跟踪" [Automatic Weld Seam Tracking Combining Convolutional Neural Network and Correlation Filtering]. Chinese Journal of Lasers 48, no. 22 (2021): 2202011. http://dx.doi.org/10.3788/cjl202148.2202011.

16

Xing Yongxin, Wu Biqiao, Wu Songping, and Wang Tianyi. "基于卷积神经网络和迁移学习的奶牛个体识别" [Individual Identification of Dairy Cows Based on Convolutional Neural Network and Transfer Learning]. Laser & Optoelectronics Progress 58, no. 16 (2021): 1628002. http://dx.doi.org/10.3788/lop202158.1628002.

17

Chen Wenhao, He Jing, and Liu Gang. "引入注意力机制的卷积神经网络高光谱图像分类" [Hyperspectral Image Classification with a Convolutional Neural Network Incorporating an Attention Mechanism]. Laser & Optoelectronics Progress 59, no. 18 (2022): 1811001. http://dx.doi.org/10.3788/lop202259.1811001.

18

Li Zhuorong, Tang Yunqi, and Cai Nengbin. "基于卷积神经网络的现场勘查照片分类方法" [A Convolutional Neural Network Method for Classifying Crime Scene Investigation Photographs]. Laser & Optoelectronics Progress 60, no. 4 (2023): 0410007. http://dx.doi.org/10.3788/lop212827.

19

Shen, Xianhao, Changhong Zhu, Yihao Zang, and Shaohua Niu. "A Method for Detecting Abnormal Data of Network Nodes Based on Convolutional Neural Network." 電腦學刊 [Journal of Computers] 33, no. 3 (June 2022): 049–58. http://dx.doi.org/10.53106/199115992022063303004.

Abstract:
Abnormal data detection is an important step in ensuring the accuracy and reliability of node data in wireless sensor networks. In this paper, a data classification method based on a convolutional neural network is proposed to solve the problem of data anomaly detection in wireless sensor networks. First, normal data and abnormal data generated after fault injection are normalized and mapped to gray images, which serve as the input of the convolutional neural network. Then, based on the classical convolutional neural network, three new models are designed by varying the parameters of the convolutional and fully connected layers. By self-learning data characteristics in the convolution layers, this approach avoids the traditional detection algorithms' sensitivity to threshold choices. The experimental results show that the method has better detection performance and higher reliability.
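The normalize-and-map-to-gray-image step can be sketched in a few lines (our own illustration; the paper's exact scaling and image dimensions are not specified in the abstract):

```python
def to_gray_image(readings, width):
    """Min-max normalize sensor readings to the 0-255 gray range and reshape
    them into rows of `width` pixels, forming a small grayscale image."""
    lo, hi = min(readings), max(readings)
    span = (hi - lo) or 1  # avoid division by zero for a constant signal
    gray = [round(255 * (x - lo) / span) for x in readings]
    return [gray[i:i + width] for i in range(0, len(gray), width)]

img = to_gray_image([0.1, 0.2, 0.3, 0.4], 2)
print(img)  # [[0, 85], [170, 255]]
```

The resulting 2D array is the kind of input a small CNN classifier can consume directly.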
20

Yan, Ming, and Zhe He. "Dance Action Recognition Model Using Deep Learning Network in Streaming Media Environment." Journal of Environmental and Public Health 2022 (September 12, 2022): 1–10. http://dx.doi.org/10.1155/2022/8955326.

Abstract:
Dance movement recognition is a video technology that has a significant impact on intelligent applications and is widely used in many industries, for example in the training of intelligent dance assistants. Dancers' postures can be reconstructed by extracting features from their images, then examined and corrected as their movements are recognised. The most crucial aspect of this technology is effective feature extraction, and deep learning is currently one of the best ways to extract video features. In this paper, a dance movement recognition method is studied using a convolutional neural network based on a deep learning network, and a simulation test confirms the viability of the method. Thanks to the addition of manually extracted time-domain optical flow information, the network's accuracy in recognising dance movements increases by 30.65% and 19.49% over the InceptionV3 and 3D-CNN networks, respectively. This shows that the convolutional neural network proposed in this paper is more effective at identifying dance movements. The technology has a wide range of applications in the instruction and practice of dance, and this research has promising application potential.
21

Belorutsky, R. Yu, and S. V. Zhitnik. "SPEECH RECOGNITION BASED ON CONVOLUTION NEURAL NETWORKS." Issues of radio electronics, no. 4 (May 10, 2019): 47–52. http://dx.doi.org/10.21778/2218-5453-2019-4-47-52.

Abstract:
The problem of recognizing human speech, in the form of the digits one to ten recorded with a dictaphone, is considered. Sound signal spectrograms are recognized by means of convolutional neural networks. Algorithms for input data preprocessing, network training, and word recognition are implemented. The recognition accuracy is estimated for different numbers of convolution layers, the appropriate number is determined, and a neural network structure is proposed. Recognition accuracy is compared when the network input is the spectrogram versus the first two formants. The recognition algorithm is tested on male and female voices with different durations of pronunciation.
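A toy version of the spectrogram front end (a naive sliding-window DFT sketch, not the authors' implementation) shows the 2D time-frequency input such a network receives:

```python
import cmath
import math

def spectrogram(signal, win=8, hop=4):
    """Magnitude spectrogram via a naive DFT: one spectrum of win//2 + 1
    frequency bins per frame, forming the 2D image a CNN would consume."""
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    return [[abs(sum(x * cmath.exp(-2j * math.pi * k * n / win)
                     for n, x in enumerate(frame)))
             for k in range(win // 2 + 1)]
            for frame in frames]

tone = [math.sin(math.pi * n / 2) for n in range(32)]  # sine of period 4
spec = spectrogram(tone)
print(spec[0].index(max(spec[0])))  # the period-4 tone peaks in bin 2
```

Real systems use an FFT with windowing and log-mel scaling, but the frame-by-frame structure of the input is the same.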
22

Konarev, D., and A. Gulamov. "ACCURACY IMPROVING OF PRE-TRAINED NEURAL NETWORKS BY FINE TUNING." EurasianUnionScientists 5, no. 1(82) (February 15, 2021): 26–28. http://dx.doi.org/10.31618/esu.2413-9335.2021.5.82.1231.

Abstract:
Methods of improving the accuracy of pre-trained networks are discussed. Images of ships are the input data for the networks, which are built and trained using the Keras and TensorFlow machine learning libraries. Fine-tuning of previously trained convolutional artificial neural networks for pattern recognition tasks is described; fine-tuning of the VGG16 and VGG19 networks is done using Keras Applications. The accuracy of the VGG16 network with fine-tuning of the last convolution block increased from 94.38% to 95.21%, an increase of only 0.83%. The accuracy of the VGG19 network with fine-tuning of the last convolution block increased from 92.97% to 96.39%, an increase of 3.42%.
23

Geum, Young Hee, Arjun Kumar Rathie, and Hwajoon Kim. "Matrix Expression of Convolution and Its Generalized Continuous Form." Symmetry 12, no. 11 (October 29, 2020): 1791. http://dx.doi.org/10.3390/sym12111791.

Abstract:
In this paper, we consider the matrix expression of convolution and its generalized continuous form. The matrix expression of convolution is applied effectively in convolutional neural networks, and in this study we relate the concept of convolution in mathematics to that in convolutional neural networks; convolution is, after all, a core process of deep learning, the learning method of deep neural networks. In addition, the generalized continuous form of convolution is expressed as a new variant of Laplace-type transform that encompasses almost all existing integral transforms. Finally, we describe the theoretical content in as much detail as possible so that the paper may be self-contained.
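The matrix expression of convolution can be illustrated with a banded (Toeplitz-style) matrix whose product with a signal reproduces the sliding-window operation CNNs call convolution (strictly, cross-correlation); a small sketch:

```python
def toeplitz_conv_matrix(kernel, n):
    """Build the (n - len(kernel) + 1) x n banded matrix whose product with a
    length-n signal equals the valid-mode sliding dot product with `kernel`."""
    k = len(kernel)
    return [[kernel[j - i] if 0 <= j - i < k else 0 for j in range(n)]
            for i in range(n - k + 1)]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

signal = [1, 2, 3, 4, 5]
kernel = [1, 0, -1]
T = toeplitz_conv_matrix(kernel, len(signal))
print(matvec(T, signal))  # [1-3, 2-4, 3-5] = [-2, -2, -2]
```

Expressing the layer this way is exactly what lets convolution be computed as one matrix multiplication (the im2col trick in 2D).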
24

Tian, Feng, Shiao Zhang, Miao Cao, and Xiaojun Huang. "Research on accelerated coding absorber design with deep learning." Physica Scripta 98, no. 9 (August 24, 2023): 096003. http://dx.doi.org/10.1088/1402-4896/acf00a.

Abstract:
The traditional design of metamaterials requires a large amount of prior knowledge of electromagnetism and is time-consuming and labour-intensive, but these challenges can be addressed by using trained neural networks to accelerate the forward design process. However, when it comes to coded absorbers, there is no clear 'guidance manual' on which neural network is most effective for the task. In this paper, three basic neural networks (fully connected, one-dimensional convolutional, and two-dimensional convolutional) are designed around the visible pattern and structural parameters of the coded absorber, trained under the same conditions, and evaluated for performance. The two-dimensional convolutional neural network achieved the highest accuracy on the test set, with an average accuracy of 92.37% and 70.3% of groups reaching an accuracy greater than 95%. These results indicate that trained neural networks have great potential to approximate the functionality of traditional electromagnetic simulation software, and that the two-dimensional convolutional neural network is the best choice for accelerating the forward design of coded absorbers.
25

Gafarov, Fail, Andrey Berdnikov, and Pavel Ustin. "Online social network user performance prediction by graph neural networks." International Journal of Advances in Intelligent Informatics 8, no. 3 (November 30, 2022): 285. http://dx.doi.org/10.26555/ijain.v8i3.859.

Abstract:
Online social networks provide rich information that characterizes the user’s personality, his interests, hobbies, and reflects his current state. Users of social networks publish photos, posts, videos, audio, etc. every day. Online social networks (OSN) open up a wide range of research opportunities for scientists. Much research conducted in recent years using graph neural networks (GNN) has shown their advantages over conventional deep learning. In particular, the use of graph neural networks for online social network analysis seems to be the most suitable. In this article we studied the use of graph convolutional neural networks with different convolution layers (GCNConv, SAGEConv, GraphConv, GATConv, TransformerConv, GINConv) for predicting the user’s professional success in VKontakte online social network, based on data obtained from his profiles. We have used various parameters obtained from users’ personal pages in VKontakte social network (the number of friends, subscribers, interesting pages, etc.) as their features for determining the professional success, as well as networks (graphs) reflecting connections between users (followers/ friends). In this work we performed graph classification by using graph convolutional neural networks (with different types of convolution layers). The best accuracy of the graph convolutional neural network (0.88) was achieved by using the graph isomorphism network (GIN) layer. The results, obtained in this work, will serve for further studies of social success, based on metrics of personal profiles of OSN users and social graphs using neural network methods.
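The neighbor-aggregation idea shared by layers such as GCNConv and SAGEConv can be sketched in a few lines (a simplified mean-aggregation version, not the libraries' exact normalization):

```python
def graph_conv_layer(adj, feats, weight):
    """One simplified graph-convolution layer: mean-aggregate the features of
    each node's neighbours (plus the node itself via a self-loop), then apply
    a shared linear transform."""
    d = len(feats[0])
    agg = []
    for i in range(len(feats)):
        nbrs = adj[i] + [i]  # include a self-loop
        agg.append([sum(feats[j][c] for j in nbrs) / len(nbrs)
                    for c in range(d)])
    # shared linear transform: (n x d) times (d x d_out)
    return [[sum(row[c] * weight[c][o] for c in range(d))
             for o in range(len(weight[0]))] for row in agg]

adj = {0: [1], 1: [0, 2], 2: [1]}  # a 3-node path graph
feats = [[1.0], [2.0], [3.0]]
w = [[1.0]]
print(graph_conv_layer(adj, feats, w))  # [[1.5], [2.0], [2.5]]
```

Stacking such layers and pooling the node features gives the graph-level representation used for classification tasks like the one in this paper.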
26

Chimakurthi, Venkata Naga Satya Surendra. "Application of Convolution Neural Network for Digital Image Processing." Engineering International 8, no. 2 (December 31, 2020): 149—xxx. http://dx.doi.org/10.18034/ei.v8i2.592.

Abstract:
Various deep learning approaches use data to train neural network algorithms for machine learning tasks such as distinguishing different categories of objects. Convolutional deep learning algorithms are particularly strong at image processing. With the development of multi-layer convolutional neural networks for high-level tasks such as object recognition, object detection, and, more recently, semantic classification, the field has seen great success with this approach. A two-phase approach is frequently employed in semantic segmentation: networks are trained to deliver good local, per-pixel predictions, which are then combined in a global graphical model. Convolutional Neural Networks (CNN or ConvNet) are complex neural networks in the field of artificial intelligence, frequently used in image classification and recognition because of their high accuracy. The idea goes back to the computer scientist Yann LeCun in the late 1990s, inspired by the human model of cognition. A CNN uses a hierarchical model that builds up to convolution layers whose neurons are connected and whose output is processed by a classifier. Using an image processing application as an example, this article demonstrates how a CNN architecture is implemented end to end, helping the reader understand the advantages of this approach.
27

Akbar, Mutaqin. "Traffic sign recognition using convolutional neural networks." Jurnal Teknologi dan Sistem Komputer 9, no. 2 (March 5, 2021): 120–25. http://dx.doi.org/10.14710/jtsiskom.2021.13959.

Abstract:
Traffic sign recognition (TSR) recognizes traffic signs by means of image processing. This paper presents traffic sign recognition in Indonesia using convolutional neural networks (CNN). The dataset comprises 2050 images of traffic signs of 10 kinds. The CNN used in this study consists of one convolution layer, one pooling layer using the maxpool operation, and one fully connected layer, trained with stochastic gradient descent (SGD). At the training stage, using 1750 training images, 48 filters, and a learning rate of 0.005, training reaches a loss of 0.005 and an accuracy of 100%. At the testing stage, on 300 test images, the system recognizes the signs with a loss of 0.107 and an accuracy of 97.33%.
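The layer stack described above (one convolution, one maxpool) can be sketched in plain Python; the 2x2 kernel here is an arbitrary toy example, not the paper's learned filters:

```python
def conv2d_valid(img, kern):
    """Valid-mode 2D cross-correlation, the core operation of a conv layer."""
    kh, kw = len(kern), len(kern[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kern[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def maxpool2x2(fmap):
    """2x2 max pooling with stride 2, as used after the conv layer."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

img = [[1, 2, 3, 4, 0],
       [5, 6, 7, 8, 0],
       [9, 8, 7, 6, 0],
       [5, 4, 3, 2, 0],
       [1, 1, 1, 1, 0]]
edge = [[1, 0], [0, -1]]        # toy 2x2 kernel
fmap = conv2d_valid(img, edge)  # 4x4 feature map
print(maxpool2x2(fmap))         # 2x2 pooled map: [[-1, 8], [5, 6]]
```

A fully connected layer would then flatten the pooled map and score the 10 sign classes.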
28

Syamala Rao, P., G. P. Saradhi Varma, and Rajasekhar Mutukuri. "Effective and High Computing Algorithms for Convolution Neural Networks." International Journal of Engineering & Technology 7, no. 3.31 (August 24, 2018): 66. http://dx.doi.org/10.14419/ijet.v7i3.31.18203.

Abstract:
Training a large data set with deep convolutional neural networks takes GPU-days, which is a time-consuming process. Self-driving cars require very low latency for pedestrian detection, and image recognition on mobile phones is constrained by limited processing resources; in these situations, the computation speed of convolutional neural networks determines whether they succeed. For large filters, conventional FFT-based convolution is preferably fast, yet state-of-the-art convolutional neural networks use small 3 × 3 filters. We introduce a new class of fast algorithms for convolutional neural networks based on Winograd's minimal filtering algorithms. The algorithms compute minimal-complexity convolution over small tiles, which increases computing speed with small batch sizes and small filters. We benchmark a GPU implementation of our algorithm on the VGG network and show state-of-the-art throughput at batch sizes from 1 to 64.
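Winograd's minimal filtering algorithm F(2,3), the building block referenced above, computes two outputs of a 3-tap filter with 4 multiplications instead of the direct method's 6:

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter g over a 4-sample input
    tile d, using 4 multiplications instead of the direct method's 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_f23(d, g):
    """Reference: the same two outputs by direct sliding dot products."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

tile, filt = [1.0, 2.0, 3.0, 4.0], [0.5, 1.0, -1.0]
print(winograd_f23(tile, filt), direct_f23(tile, filt))  # both [-0.5, 0.0]
```

The filter-side transforms depend only on g, so in a CNN they are computed once per trained filter, and the saving compounds when the 1D algorithm is nested to form F(2x2, 3x3) for 3 × 3 kernels.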
29

Guo Congzhou, Li Ke, Zhu Yikun, Tong Xiaochong, and Wang Xiwen. "文本图像倾斜角度检测的深度卷积神经网络方法" [A Deep Convolutional Neural Network Method for Detecting the Skew Angle of Text Images]. Laser & Optoelectronics Progress 58, no. 14 (2021): 1410007. http://dx.doi.org/10.3788/lop202158.1410007.

30

Bunrit, Supaporn, Thuttaphol Inkian, Nittaya Kerdprasop, and Kittisak Kerdprasop. "Text-Independent Speaker Identification Using Deep Learning Model of Convolution Neural Network." International Journal of Machine Learning and Computing 9, no. 2 (April 2019): 143–48. http://dx.doi.org/10.18178/ijmlc.2019.9.2.778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Shi, Lin, and Lei Zheng. "An IGWOCNN Deep Method for Medical Education Quality Estimating." Mathematical Problems in Engineering 2022 (August 9, 2022): 1–5. http://dx.doi.org/10.1155/2022/9037726.

Full text
Abstract:
The deep learning and mining ability of big data is used to analyze shortcomings in a teaching scheme, and the scheme is then optimized to improve teaching ability. A convolutional neural network is trained with an improved grey wolf optimizer, which raises the efficiency of the search for the optimal value and prevents the algorithm from settling on a local optimum. To address this weakness of standard grey wolf optimization, the improved variant uses a variable convergence factor, which balances the algorithm's global and local search abilities. Testing shows a quality-estimation accuracy of 100% for the convolutional neural network optimized by improved grey wolf optimization, 93.33% for the network optimized by standard grey wolf optimization, and 86.67% for a classical convolutional neural network. We conclude that, among the compared models, the convolutional neural network optimized by improved grey wolf optimization estimates medical education quality best.
APA, Harvard, Vancouver, ISO, and other styles
32

Srinivas, K., B. Kavitha Rani, M. Varaprasad Rao, G. Madhukar, and B. Venkata Ramana. "Convolution Neural Networks for Binary Classification." Journal of Computational and Theoretical Nanoscience 16, no. 11 (November 1, 2019): 4877–82. http://dx.doi.org/10.1166/jctn.2019.8399.

Full text
Abstract:
Convolutional neural networks (CNNs) are similar to "ordinary" neural networks in that they are made up of hidden layers of neurons with "learnable" parameters. These neurons receive inputs, perform a dot product, and then follow it with a non-linearity. The whole network expresses a mapping between raw image pixels and class scores. Conventionally, the Softmax function is the classifier used at the last layer of the network, although studies have been conducted to challenge this norm. Empirical results show that a CNN model can achieve a test accuracy of ≈99.04% on the MNIST dataset, which consists of 60,000 training images and 10,000 testing images. Inspired by that MNIST experiment, we used a dataset of cat and dog images gathered from different sources over the internet, with 10,000 images per class. The overall accuracy on the training and validation sets is 96.85%. These results might be improved by applying data pre-processing techniques to the dataset, or by using a more sophisticated base CNN model than the one in this study.
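The Softmax classifier the abstract mentions, used at the last layer to turn class scores into probabilities, can be sketched as follows (a minimal, numerically stable version; not taken from the paper's code):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: raw class scores -> probabilities."""
    z = z - z.max()      # shift by the max score; the result is unchanged
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # illustrative logits for 3 classes
p = softmax(scores)
print(p)  # probabilities over the three classes, summing to 1 (up to floating point)
```

The predicted class is simply `np.argmax(p)`; the shift by the maximum avoids overflow in `np.exp` for large scores without changing the output.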
APA, Harvard, Vancouver, ISO, and other styles
33

Bass, L. P., Yu A. Plastinin, and I. Yu Skryabysheva. "The machine training in problems of satellite images’s processing." Metrologiya, no. 4 (2020): 15–37. http://dx.doi.org/10.32446/0132-4713.2020-4-15-37.

Full text
Abstract:
The use of technical (computer) vision systems for Earth remote sensing is considered. An overview of the software and hardware used in computer vision systems for processing satellite images is presented. Algorithmic methods for processing the data with trained neural networks are described, and examples of the algorithmic processing of satellite images by artificial convolutional neural networks are given. Ways to increase the accuracy of satellite image recognition are defined, and practical applications of convolutional neural networks onboard microsatellites for Earth remote sensing are presented.
APA, Harvard, Vancouver, ISO, and other styles
34

Wu, Chenxi, Rong Jiang, Xin Wu, Chao Zhong, and Caixia Huang. "A Time–Frequency Residual Convolution Neural Network for the Fault Diagnosis of Rolling Bearings." Processes 12, no. 1 (December 25, 2023): 54. http://dx.doi.org/10.3390/pr12010054.

Full text
Abstract:
A time–frequency residual convolution neural network (TFRCNN) was proposed to identify various rolling bearing fault types more efficiently. Three novel points about TFRCNN are presented as follows: First, by constructing a double-branch convolution network in the time domain and the frequency domain, the respective features in the time domain and the frequency domain were extracted to ensure the rich and complete feature representation of raw data sources. Second, specific residual structures were designed to prevent learning degradation of the deep network, and global average pooling was adopted to improve the network’s sparsity. Third, TFRCNN was better than the other models in terms of prediction accuracy, robustness, generalization ability, and convergence. The experimental results demonstrate that the prediction accuracy rate of TFRCNN, trained using mixing load data, reached 98.88 to 99.92% after optimizing the initial learning rate and choosing the optimizer and loss function. It was verified that TFRCNN can adaptively learn to extract deep fault features, accurately identify bearing fault conditions, and overcome the limitations of classical shallow feature extraction and classification methods, as well as common convolution neural networks. Hence, this investigation revealed TFRCNN’s potential for bearing fault diagnosis in practical engineering applications.
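The global average pooling that TFRCNN adopts can be sketched in a few lines: each channel's feature map collapses to its average, replacing a large flatten-plus-dense stage and improving sparsity. This is an illustration of the operation, not the paper's implementation:

```python
import numpy as np

def global_average_pool(feature_maps):
    """Global average pooling: collapse each channel of a C x H x W
    feature tensor to one scalar, giving a C-dimensional descriptor."""
    return feature_maps.mean(axis=(-2, -1))

# Two 4x4 feature maps (illustrative values)
fmap = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
pooled = global_average_pool(fmap)
print(pooled)  # one scalar per channel: 7.5 and 23.5
```

Because the output size depends only on the channel count, the classifier that follows needs far fewer parameters than one fed a flattened H×W map, which is the sparsity benefit the abstract refers to.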
APA, Harvard, Vancouver, ISO, and other styles
35

Liao, Shengbin, Xiaofeng Wang, and ZongKai Yang. "A heterogeneous two-stream network for human action recognition." AI Communications 36, no. 3 (August 21, 2023): 219–33. http://dx.doi.org/10.3233/aic-220188.

Full text
Abstract:
The most widely used two-stream architectures and building blocks for human action recognition in videos generally consist of 2D or 3D convolutional neural networks. 3D convolution can abstract motion information between video frames, which is essential for video classification. 3D convolutional neural networks usually obtain better performance than 2D ones; however, they also increase the computational cost. In this paper, we propose a heterogeneous two-stream architecture which incorporates two convolutional networks. One uses a mixed convolution network (MCN), which combines some 3D convolutions in the middle of 2D convolutions, to train on RGB frames; the other adopts a BN-Inception network to train on optical flow frames. Considering the redundancy of neighboring video frames, we adopt a sparse sampling strategy to decrease the computational cost. Our architecture is trained and evaluated on the standard video action benchmarks HMDB51 and UCF101. Experimental results show our approach obtains state-of-the-art performance on HMDB51 (73.04%) and UCF101 (95.27%).
APA, Harvard, Vancouver, ISO, and other styles
36

Mohinabonu, Agzamova. "ENHANCING FACIAL RECOGNITION THROUGH CONTRASTIVE CONVOLUTION: A COMPREHENSIVE METHODOLOGY." American Journal of Engineering and Technology 5, no. 11 (November 1, 2023): 105–14. http://dx.doi.org/10.37547/tajet/volume05issue11-15.

Full text
Abstract:
This study presents an innovative approach to enhance facial recognition technology using contrastive convolutional neural networks (CNNs). The primary focus is on improving the accuracy and efficiency of face recognition systems under varying conditions. Key elements of this approach include meticulous data preparation and preprocessing, where images undergo normalization and diverse augmentation techniques to ensure quality inputs. The network architecture is designed to process pairs of face images, utilizing a common feature extractor and cascaded convolution layers for detailed feature representation. A specialized kernel generator further refines the process, emphasizing unique facial characteristics. The core of the training regimen is a contrastive loss function, optimized through gradient descent to enhance the network's discriminatory capabilities. Results from the study demonstrate a significant improvement in recognition accuracy, particularly highlighted by the superior performance of the proposed model in comparison to standard facial recognition algorithms. This research provides a comprehensive methodology that could revolutionize face recognition technology, offering more reliable and efficient solutions for various applications.
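The contrastive loss at the core of the described training regimen is commonly formulated as in Hadsell et al.: similar pairs are pulled together, dissimilar pairs are pushed apart up to a margin. A minimal sketch follows; the exact loss and embedding pipeline used by the paper may differ:

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """Contrastive loss on one pair of face embeddings:
    same-identity pairs are penalized by squared distance,
    different-identity pairs are penalized only inside `margin`."""
    d = np.linalg.norm(f1 - f2)
    if same:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

a, b = np.array([1.0, 0.0]), np.array([0.6, 0.8])
print(contrastive_loss(a, b, same=True))    # nearby same-identity pair: small loss
print(contrastive_loss(a, b, same=False))   # inside the margin: pushed apart
```

During training, this per-pair loss is averaged over a batch of image pairs and minimized by gradient descent, which shapes the feature extractor so that distance in embedding space reflects identity.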
APA, Harvard, Vancouver, ISO, and other styles
37

Pan, Yumin. "Different Types of Neural Networks and Applications: Evidence from Feedforward, Convolutional and Recurrent Neural Networks." Highlights in Science, Engineering and Technology 85 (March 13, 2024): 247–55. http://dx.doi.org/10.54097/6rn1wd81.

Full text
Abstract:
Neural networks have achieved great progress in the 90 years since they were formally introduced in 1943. Because of their wide application and huge research potential, the technology attracts more and more scientific and technical workers to neural network research. Neural network technology is an essential component of AI development and a significant indicator of a country's overall technological strength. In this paper, we demonstrate feedforward neural networks, convolutional neural networks, and recurrent neural networks, and evaluate them on datasets from kaggle.com and CSDN (a Chinese IT community). Through this paper, readers can gain a better understanding of the operating principles of each type of neural network, the kinds of task each is specialized for, and their applications, so that the paper can prompt readers' own exploration, help them start learning neural networks, or serve as a supplement or reference for future scholars. Finally, the paper presents the evaluation results: the accuracy, loss curves, and accuracy curves of the networks.
APA, Harvard, Vancouver, ISO, and other styles
38

Hu, Kejian, and Xiaoguang Wu. "A Bridge Structure 3D Representation for Deep Neural Network and Its Application in Frequency Estimation." Advances in Civil Engineering 2022 (March 22, 2022): 1–13. http://dx.doi.org/10.1155/2022/1999013.

Full text
Abstract:
Currently, most predictions related to bridge geometry use shallow neural networks, which limit the network’s ability to fit since the input form limits the depth of the neural network. Therefore, this study proposed a new 3D representation of bridge structures. Based on the geometric parameters of the bridge structure, three 4D tensors were formed. This form of representation not only retained all geometric information but also expressed the spatial relationship of the structure. Then, this study constructed the corresponding 3D convolutional neural network and used it to estimate the frequency of the bridge. In addition, this study also developed a traditional shallow neural network for comparison. The application of 3D representation and 3D convolution could effectively reduce the prediction error. The 3D representation presented in this study could be used not only for frequency prediction but also for any prediction problems related to bridge geometry.
APA, Harvard, Vancouver, ISO, and other styles
39

Sapunov, V. V., S. A. Botman, G. V. Kamyshov, and N. N. Shusharina. "Application of Convolution with Periodic Boundary Condition for Processing Data from Cylindrical Electrode Arrays." INFORMACIONNYE TEHNOLOGII 27, no. 3 (March 15, 2021): 125–31. http://dx.doi.org/10.17587/it.27.125-131.

Full text
Abstract:
In this paper, a modification of convolutional neural networks for processing electromyographic data obtained from cylindrical arrays of electrodes is proposed. Taking into account the spatial symmetry of the array, the convolution operation is redefined with periodic boundary conditions, which allows the construction of a neural network that is invariant to rotations of the electrode array around its axis. The applicability of the proposed approach was evaluated by constructing a neural network containing the new type of convolutional layer and training it on the open UC2018 DualMyo dataset to classify gestures from the data of a single myo-bracelet. The network based on the new type of convolution performed better than common convolutions when trained on data without augmentation, which indicates that the network is invariant to cyclic shifts in the input data. Networks with modified and with common convolutional layers achieved F1 scores of 0.96 and 0.65, respectively, with no augmentation of the input data, and 0.98 and 0.96 when train-time augmentation was applied; test data were augmented in both cases. Potentially, the proposed convolution can be applied to any data with the same connectivity, allowing time-tested architectural solutions to be adapted by replacing common convolutions with modified ones.
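The periodic-boundary convolution described here can be sketched in 1D: wrap-around padding treats the electrode ring as a closed loop, so the layer commutes with cyclic shifts of its input. This is a simplified single-channel sketch, not the authors' code:

```python
import numpy as np

def circular_conv1d(x, k):
    """1D filtering with periodic boundary conditions (CNN-style
    correlation, no kernel flip): the signal is treated as a ring,
    so the output has the same length as the input and the operation
    is equivariant to cyclic shifts."""
    n, m = len(x), len(k)
    pad = m // 2
    xp = np.pad(x, pad, mode='wrap')   # wrap = periodic boundary
    return np.array([np.dot(xp[i:i + m], k) for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0])     # illustrative ring of 4 sensors
k = np.array([1.0, 0.0, -1.0])
y = circular_conv1d(x, k)
y_shift = circular_conv1d(np.roll(x, 1), k)
print(np.allclose(np.roll(y, 1), y_shift))  # True: shift equivariance
```

Stacking such layers and finishing with a shift-invariant reduction (e.g. global pooling around the ring) yields a classifier whose output does not depend on how the bracelet is rotated on the arm, which is the invariance the abstract reports.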
APA, Harvard, Vancouver, ISO, and other styles
40

Vazquez, Napoli R., Dan P. Fernandes, and Daniel H. Chen. "Control Valve Stiction: Experimentation, Modeling, Model Validation and Detection with Convolution Neural Network." International Journal of Chemical Engineering and Applications 10, no. 6 (December 2019): 195–99. http://dx.doi.org/10.18178/ijcea.2019.10.6.768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Xu Mingzhu, 许明珠, 徐浩 Xu Hao, 孔鹏 Kong Peng, and 吴艳兰 Wu Yanlan. "结合植被指数和卷积神经网络的遥感植被分类方法." Laser & Optoelectronics Progress 59, no. 24 (2022): 2428005. http://dx.doi.org/10.3788/lop202259.2428005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Rustam, Rustam, Rita Noveriza, Siti Khotijah, Syamsul Rizal, Melati Melati, Nor Kumalasari Caecar Pratiwi, Muhammad Hablul Barri, and Koredianto Usman. "Convolution Neural Network Approach for Early Identification of Patchouli Leaf Disease in Indonesia." Journal of Image and Graphics 12, no. 2 (2024): 137–44. http://dx.doi.org/10.18178/joig.12.2.137-144.

Full text
Abstract:
Indonesia is the largest supplier of patchouli oil on the world market, contributing 80%–90%. Most patchouli oil is exported for the perfume, cosmetics, pharmaceutical, antiseptic, aromatherapy, and insecticide industries. The emergence of patchouli leaf disease significantly reduces the production of wet and dry patchouli, patchouli oil, and patchouli alcohol. Selecting patchouli cuttings (seedlings) that are entirely healthy and disease-free is therefore very important to prevent disease transmission from one area to another, and disease-free seedlings are also essential to avoid propagating diseased patchouli plants. So far, early identification of patchouli plant health has been carried out through visual observation by experts using antiviral serum tested in the laboratory, but this testing process is expensive. In this paper, we therefore propose a novel Convolutional Neural Network (CNN) architecture for patchouli leaf diseases: a system for the early identification of whether a patchouli leaf is diseased or healthy. Our CNN model uses three convolution layers, a dense layer, and a dropout layer. We compare the proposed model with the well-known EfficientNetB0, AlexNet, InceptionV3, MobileNetV2, and VGG16 models. The results show that the proposed model outperformed all five, as confirmed by predictions on new, unseen test data. This research contributes to the early identification of patchouli leaf diseases, reducing the expensive cost of identifying them.
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Liming, Yihang Yang, Jinghui Yang, Ningyuan Zhao, Ling Wu, Liguo Wang, and Tianrui Wang. "FusionNet: A Convolution–Transformer Fusion Network for Hyperspectral Image Classification." Remote Sensing 14, no. 16 (August 19, 2022): 4066. http://dx.doi.org/10.3390/rs14164066.

Full text
Abstract:
In recent years, deep-learning-based hyperspectral image (HSI) classification networks have become one of the most dominant implementations in HSI classification tasks. Among these networks, convolutional neural networks (CNNs) and attention-based networks have prevailed over other HSI classification networks. While convolutional neural networks with perceptual fields can effectively extract local features in the spatial dimension of HSI, they are poor at capturing the global and sequential features of spectral–spatial information; networks based on attention mechanisms, for example, Transformer, usually have better ability to capture global features, but are relatively weak in discriminating local features. This paper proposes a fusion network of convolution and Transformer for HSI classification, known as FusionNet, in which convolution and Transformer are fused in both serial and parallel mechanisms to achieve the full utilization of HSI features. Experimental results demonstrate that the proposed network has superior classification results compared to previous similar networks, and performs relatively well even on a small amount of training data.
APA, Harvard, Vancouver, ISO, and other styles
44

Yan, Peizhi, and Yi Feng. "Using Convolution and Deep Learning in Gomoku Game Artificial Intelligence." Parallel Processing Letters 28, no. 03 (September 2018): 1850011. http://dx.doi.org/10.1142/s0129626418500111.

Full text
Abstract:
Gomoku is an ancient board game. The traditional approach to solving Gomoku is to apply tree search to the game tree; although the rules of Gomoku are straightforward, the game tree complexity is enormous. Unlike many other board games such as chess and shogi, the Gomoku board state is more visually intuitive: analyzing the visual patterns on the board is fundamental to playing the game. In this paper, we designed a deep convolutional neural network model to help the machine learn from training data collected from human players. Based on this original model, we made some changes to obtain two variant networks and compared their performance with the original in our experiments. Our original network achieved 69% accuracy on the training data and 38% accuracy on the testing data. Because the decisions made by the neural network are intuitive, we also designed a hard-coded, convolution-based Gomoku evaluation function to assist the network in making decisions. This hybrid Gomoku artificial intelligence (AI) further improved the performance of a pure neural-network-based Gomoku AI.
APA, Harvard, Vancouver, ISO, and other styles
45

Wan, Renzhuo, Shuping Mei, Jun Wang, Min Liu, and Fan Yang. "Multivariate Temporal Convolutional Network: A Deep Neural Networks Approach for Multivariate Time Series Forecasting." Electronics 8, no. 8 (August 7, 2019): 876. http://dx.doi.org/10.3390/electronics8080876.

Full text
Abstract:
Multivariate time series prediction has been widely studied in power energy, aerology, meteorology, finance, transportation, and other fields. Traditional modeling methods are complex and inefficient at capturing the long-term multivariate dependencies needed for the desired forecasting accuracy. To address these concerns, various deep learning models based on Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) methods have been proposed. To improve prediction accuracy and minimize dependence on the multivariate structure of aperiodic data, in this article the Beijing PM2.5 and ISO-NE datasets are analyzed with a novel Multivariate Temporal Convolution Network (M-TCN) model. In this model, multivariate time series prediction is framed as a sequence-to-sequence task for non-periodic datasets, using multichannel residual blocks in parallel with an asymmetric structure based on a deep convolutional neural network. The results are compared with strong competing algorithms: long short-term memory (LSTM), convolutional LSTM (ConvLSTM), Temporal Convolution Network (TCN), and Multivariate Attention LSTM-FCN (MALSTM-FCN). They indicate significant improvements in the prediction accuracy, robustness, and generalization of our model.
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Wei, Yanjie Zhu, Zhuoxu Cui, and Dong Liang. "Is Each Layer Non-trivial in CNN? (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15915–16. http://dx.doi.org/10.1609/aaai.v35i18.17954.

Full text
Abstract:
Convolutional neural network (CNN) models have achieved great success in many fields, and with the advent of ResNet, networks used in practice are getting deeper and wider. However, is each layer non-trivial? To answer this question, we trained a network on the training set, replaced some of its convolution kernels with zeros, and tested the resulting models on the test set. Comparing the experimental results with the baseline, we show that similar or even identical performance can be reached. Although convolution kernels are the cores of networks, we demonstrate that some of them in ResNet are trivial and regular.
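The probing procedure the abstract describes, zeroing convolution kernels and re-testing, can be sketched on a toy convolution bank; names and shapes here are illustrative, not the paper's setup:

```python
import numpy as np

def conv_layer(x, kernels):
    """Naive valid-mode 1D convolution bank: one output channel per kernel."""
    return np.array([[np.dot(x[i:i + len(k)], k)
                      for i in range(len(x) - len(k) + 1)] for k in kernels])

rng = np.random.default_rng(0)
x = rng.normal(size=16)                 # toy input signal
kernels = rng.normal(size=(4, 3))       # 4 learned kernels of width 3
full = conv_layer(x, kernels)

ablated = kernels.copy()
ablated[2] = 0.0                        # the paper's probe: zero one kernel
pruned = conv_layer(x, ablated)

# The ablated channel's activations vanish; the other channels are untouched.
print(np.allclose(pruned[2], 0.0), np.allclose(pruned[[0, 1, 3]], full[[0, 1, 3]]))  # True True
```

In the paper's experiments, the interesting finding is that re-running the *test set* after such zeroing can leave accuracy nearly unchanged, suggesting those kernels contribute little to the trained network.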
APA, Harvard, Vancouver, ISO, and other styles
47

Arsirii, Olena О., and Denys V. Petrosiuk. "An adaptive convolutional neural network model for human facial expression recognition." Herald of Advanced Information Technology 6, no. 2 (July 3, 2023): 128–38. http://dx.doi.org/10.15276/hait.06.2023.8.

Full text
Abstract:
The relevance of recognizing facial expressions in an image of a person's face, for forming a model of social interactions in intelligent systems for computer vision, human-machine interaction, online learning, emotional marketing, and game intelligence, is shown. The aim of the work is to reduce training time and computational resources, without losing reliability in the multi-label classification of motor units, for the problem of facial expression recognition in a human face image, by developing an adaptive convolutional neural network model and a method for its training with "fine-tuning" of parameters. To achieve this goal, several tasks were solved. Models of specialized convolutional neural networks and networks pre-trained on the ImageNet set were investigated, and the stages of transfer learning for convolutional neural networks were shown. A convolutional neural network model and a training method were developed to solve the problem of facial expression recognition in a human face image, and the reliability of motor unit recognition based on the developed adaptive model and its transfer learning method was analyzed. It is shown that, on average, using the proposed loss function in the fully connected layer of the multi-label motor unit classifier, within the developed adaptive convolutional neural network model based on the publicly available MobileNet-v1 and its transfer learning method, increased the reliability of facial expression recognition in a human face image by 6% as estimated by the F1 score.
APA, Harvard, Vancouver, ISO, and other styles
48

Gu, Yafeng, and Li Deng. "STAGCN: Spatial–Temporal Attention Graph Convolution Network for Traffic Forecasting." Mathematics 10, no. 9 (May 8, 2022): 1599. http://dx.doi.org/10.3390/math10091599.

Full text
Abstract:
Traffic forecasting plays an important role in intelligent transportation systems. However, the prediction task is highly challenging due to the mixture of global and local spatiotemporal dependencies involved in traffic data. Existing graph neural networks (GNNs) typically capture spatial dependencies with the predefined or learnable static graph structure, ignoring the hidden dynamic patterns in traffic networks. Meanwhile, most recurrent neural networks (RNNs) or convolutional neural networks (CNNs) cannot effectively capture temporal correlations, especially for long-term temporal dependencies. In this paper, we propose a spatial–temporal attention graph convolution network (STAGCN), which acquires a static graph and a dynamic graph from data without any prior knowledge. The static graph aims to model global space adaptability, and the dynamic graph is designed to capture local dynamics in the traffic network. A gated temporal attention module is further introduced for long-term temporal dependencies, where a causal-trend attention mechanism is proposed to increase the awareness of causality and local trends in time series. Extensive experiments on four real-world traffic flow datasets demonstrate that STAGCN achieves an outstanding prediction accuracy improvement over existing solutions.
APA, Harvard, Vancouver, ISO, and other styles
49

Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network." ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3491255.

Full text
Abstract:
Tensor networks, as an effective computing framework for the efficient processing and analysis of high-dimensional data, have been successfully applied in many fields. However, the performance of traditional tensor networks still cannot match the strong fitting ability of neural networks, so some data processing algorithms based on tensor networks cannot achieve the same excellent performance as deep learning models. To further improve the learning ability of tensor networks, this article proposes a quantized tensor neural network (QTNN), which integrates the advantages of both: the powerful learning ability of neural networks and the simplicity of tensor networks. The QTNN model can further be regarded as a generalized multilayer nonlinear tensor network, which efficiently extracts low-dimensional features of the data while maintaining the original structural information. In addition, to represent local information more effectively, we introduce multiple convolution layers in QTNN to extract local features, and we develop a high-order back-propagation algorithm for training the parameters of QTNN. We conducted classification experiments on multiple representative datasets to evaluate the proposed models, and the experimental results show that QTNN is simpler and more efficient than classic deep learning models.
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Yong, Luping Wang, and Fen Liu. "Multi-Branch Attention-Based Grouped Convolution Network for Human Activity Recognition Using Inertial Sensors." Electronics 11, no. 16 (August 12, 2022): 2526. http://dx.doi.org/10.3390/electronics11162526.

Full text
Abstract:
Recently, deep neural networks have become a widely used technology in the field of sensor-based human activity recognition (HAR), and they have achieved good results. However, some convolutional neural networks lack further selection of the extracted features, or cannot process sensor data from different locations of the body independently and in parallel. Therefore, the accuracy of existing networks is not ideal; in particular, similar activities are easily confused, which limits the application of sensor-based HAR. In this paper, we propose a multi-branch neural network based on attention-based convolution. Each branch of the network consists of two layers of attention-based grouped convolution submodules. We introduce a dual attention mechanism, consisting of channel attention and spatial attention, to select the most important features. Sensor data collected at different positions on the human body are separated and fed into different network branches for training and testing independently, and the multi-branch features are finally fused. We test the proposed network on three large datasets: PAMAP2, UT, and OPPORTUNITY. The experimental results show that our method outperforms the existing state-of-the-art methods.
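The dual attention mechanism described (channel attention followed by spatial attention) can be sketched as two successive gatings of a C×H×W feature map. Real modules learn these gates with small sub-networks; plain pooled averages are used here only to show the data flow, so this is an illustrative sketch rather than the paper's module:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dual_attention(x):
    """Gate a C x H x W feature map first per channel, then per
    spatial location, mimicking channel + spatial attention."""
    ch_gate = sigmoid(x.mean(axis=(1, 2)))   # (C,): one weight per channel
    x = x * ch_gate[:, None, None]
    sp_gate = sigmoid(x.mean(axis=0))        # (H, W): one weight per location
    return x * sp_gate[None, :, :]

x = np.random.default_rng(0).normal(size=(4, 5, 5))   # toy feature map
out = dual_attention(x)
print(out.shape)  # attention reweights features; the shape is unchanged
```

Because both gates lie in (0, 1), the mechanism suppresses uninformative channels and locations rather than adding new features, which is how it helps separate easily confused activities.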
APA, Harvard, Vancouver, ISO, and other styles