Journal articles on the topic 'Sparse Deep Learning'

To see the other types of publications on this topic, follow the link: Sparse Deep Learning.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Sparse Deep Learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chai, Xintao, Genyang Tang, Kai Lin, Zhe Yan, Hanming Gu, Ronghua Peng, Xiaodong Sun, and Wenjun Cao. "Deep learning for multitrace sparse-spike deconvolution." GEOPHYSICS 86, no. 3 (April 8, 2021): V207–V218. http://dx.doi.org/10.1190/geo2020-0342.1.

Abstract:
Sparse-spike deconvolution (SSD) is an important method for seismic resolution enhancement. With the wavelet given, many trace-by-trace SSD methods have been proposed for extracting an estimate of the reflection-coefficient series from stacked traces. The main drawbacks of trace-by-trace methods are that they neither use the information from the adjacent seismograms nor do they take full advantage of the inherent spatial continuity of the seismic data. Although several multitrace methods have been consequently proposed, these methods generally rely on different assumptions and theories and require different parameter settings for different data applications. Therefore, traditional methods demand intensive human-computer interaction. This requirement undoubtedly does not fit the current dominant trend of intelligent seismic exploration. Therefore, we have developed a deep learning (DL)-based multitrace SSD approach. The approach transforms the input 2D/3D seismic data into the corresponding SSD result by training end-to-end encoder-decoder-style 2D/3D convolutional neural networks (CNN). Our key motivations are that DL is effective for mining complicated relations from data, the 2D/3D CNN can take multitrace information into account naturally, the additional information contributes to the SSD result with better spatial continuity, and parameter tuning is not necessary for CNN predictions. We determine the significance of the learning rate for the training process’s convergence. Benchmarking tests on the field 2D/3D seismic data confirm that the approach yields accurate high-resolution results that are mostly in agreement with the well logs, the DL-based multitrace SSD results generated by the 2D/3D CNNs are better than the trace-by-trace SSD results, and the 3D CNN outperforms the 2D CNN for 3D data application.
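The physics this abstract inverts is a simple convolutional forward model: a seismic trace is (approximately) a wavelet convolved with a sparse reflection-coefficient series. A minimal NumPy sketch of that forward model, with made-up spike positions and a hypothetical 30 Hz Ricker wavelet (not the authors' data or code):

```python
import numpy as np

# Sparse reflection-coefficient series: a few spikes in an otherwise zero trace.
reflectivity = np.zeros(100)
reflectivity[[20, 45, 70]] = [0.8, -0.5, 0.3]

# Ricker wavelet as an assumed source signature (30 Hz dominant frequency).
t = np.linspace(-0.05, 0.05, 25)
f = 30.0
ricker = (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

# A seismic trace is the wavelet convolved with the reflectivity;
# SSD tries to invert this mapping back to the sparse spikes.
trace = np.convolve(reflectivity, ricker, mode="same")
print(trace.shape)  # (100,)
```

SSD, whether trace-by-trace or CNN-based, amounts to inverting this mapping from `trace` back to the sparse `reflectivity`.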
2

Kerrigan, Joshua, Paul La Plante, Saul Kohn, Jonathan C. Pober, James Aguirre, Zara Abdurashidova, Paul Alexander, et al. "Optimizing sparse RFI prediction using deep learning." Monthly Notices of the Royal Astronomical Society 488, no. 2 (July 8, 2019): 2605–15. http://dx.doi.org/10.1093/mnras/stz1865.

Abstract:
Radio frequency interference (RFI) is an ever-present limiting factor among radio telescopes, even in the most remote observing locations. When looking to retain the maximum amount of sensitivity and reduce contamination for Epoch of Reionization studies, the identification and removal of RFI is especially important. In addition to improved RFI identification, we must also take into account the computational efficiency of the RFI-identification algorithm as radio interferometer arrays such as the Hydrogen Epoch of Reionization Array (HERA) grow larger in number of receivers. To address this, we present a deep fully convolutional neural network (DFCN) that is comprehensive in its use of interferometric data, where both amplitude and phase information are used jointly for identifying RFI. We train the network using simulated HERA visibilities containing mock RFI, yielding a known 'ground truth' data set for evaluating the accuracy of various RFI algorithms. Evaluation of the DFCN model is performed on observations from the 67-dish build-out, HERA-67, and achieves a data throughput of 1.6 × 10⁵ HERA time-ordered 1024-channelled visibilities per hour per GPU. We determine that, relative to an amplitude-only network, including visibility phase adds important adjacent time–frequency context, which increases discrimination between RFI and non-RFI. The inclusion of phase when predicting achieves a recall of 0.81, a precision of 0.58, and an F2 score of 0.75 as applied to our HERA-67 observations.
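The network's distinguishing input design is feeding amplitude and phase jointly rather than amplitude alone. A hypothetical shape sketch of such a two-channel (time, frequency) visibility input, with invented dimensions and random data standing in for real visibilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# A complex visibility waterfall: rows are integrations, columns are channels.
n_time, n_freq = 60, 1024
vis = rng.standard_normal((n_time, n_freq)) + 1j * rng.standard_normal((n_time, n_freq))

# Split the complex data into the two real-valued channels the network sees.
amplitude = np.abs(vis)
phase = np.angle(vis)          # wrapped to [-pi, pi]
x = np.stack([amplitude, phase], axis=0)   # (channels, time, freq) network input
print(x.shape)  # (2, 60, 1024)
```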
3

De Cnudde, Sofie, Yanou Ramon, David Martens, and Foster Provost. "Deep Learning on Big, Sparse, Behavioral Data." Big Data 7, no. 4 (December 1, 2019): 286–307. http://dx.doi.org/10.1089/big.2019.0095.

4

Davoudi, Neda, Xosé Luís Deán-Ben, and Daniel Razansky. "Deep learning optoacoustic tomography with sparse data." Nature Machine Intelligence 1, no. 10 (September 16, 2019): 453–60. http://dx.doi.org/10.1038/s42256-019-0095-3.

5

Trampert, Patrick, Sabine Schlabach, Tim Dahmen, and Philipp Slusallek. "Deep Learning for Sparse Scanning Electron Microscopy." Microscopy and Microanalysis 25, S2 (August 2019): 158–59. http://dx.doi.org/10.1017/s1431927619001521.

6

Tanuja, Nukapeyyi. "Medical Image Fusion Using Deep Learning Mechanism." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 128–36. http://dx.doi.org/10.22214/ijraset.2022.39809.

Abstract:
A sparse representation (SR) model named convolutional-sparsity-based morphological component analysis (CS-MCA) is introduced for pixel-level medical image fusion. The CS-MCA model can achieve multicomponent and global SRs of source images by integrating MCA and convolutional sparse representation (CSR) into a unified optimization framework. In the existing method, the CSRs of an image's gradient and texture components are obtained by the CS-MCA model using pre-learned dictionaries. Then, for each image component, the sparse coefficients of all the source images are merged, and the fused component is reconstructed using the corresponding dictionary. In the extension mechanism, we use deep learning-based pyramid decomposition. Deep learning is a technology in high demand nowadays, used for image classification, object detection, image segmentation, and image restoration. Keywords: CNN, CT, MRI, MCA, CS-MCA.
7

Zhou, Hongpeng, Chahine Ibrahim, Wei Xing Zheng, and Wei Pan. "Sparse Bayesian deep learning for dynamic system identification." Automatica 144 (October 2022): 110489. http://dx.doi.org/10.1016/j.automatica.2022.110489.

8

Li, Xing, and Lei Zhang. "Unbalanced data processing using deep sparse learning technique." Future Generation Computer Systems 125 (December 2021): 480–84. http://dx.doi.org/10.1016/j.future.2021.05.034.

9

Antholzer, Stephan, Markus Haltmeier, and Johannes Schwab. "Deep learning for photoacoustic tomography from sparse data." Inverse Problems in Science and Engineering 27, no. 7 (September 11, 2018): 987–1005. http://dx.doi.org/10.1080/17415977.2018.1518444.

10

Xie, Weicheng, Xi Jia, Linlin Shen, and Meng Yang. "Sparse deep feature learning for facial expression recognition." Pattern Recognition 96 (December 2019): 106966. http://dx.doi.org/10.1016/j.patcog.2019.106966.

11

Zeng, Nianyin, Hong Zhang, Baoye Song, Weibo Liu, Yurong Li, and Abdullah M. Dobaie. "Facial expression recognition via learning deep sparse autoencoders." Neurocomputing 273 (January 2018): 643–49. http://dx.doi.org/10.1016/j.neucom.2017.08.043.

12

Cheng, Xiangyi, Huaping Liu, Xinying Xu, and Fuchun Sun. "Denoising deep extreme learning machine for sparse representation." Memetic Computing 9, no. 3 (April 12, 2016): 199–212. http://dx.doi.org/10.1007/s12293-016-0185-2.

13

Charalampous, K., I. Kostavelis, A. Amanatiadis, and A. Gasteratos. "Sparse deep-learning algorithm for recognition and categorisation." Electronics Letters 48, no. 20 (2012): 1265. http://dx.doi.org/10.1049/el.2012.1033.

14

Liu, Jun-e., and Feng-Ping An. "Image Classification Algorithm Based on Deep Learning-Kernel Function." Scientific Programming 2020 (January 31, 2020): 1–14. http://dx.doi.org/10.1155/2020/7607612.

Abstract:
Although existing traditional image classification methods have been widely applied to practical problems, they exhibit several problems in use, such as unsatisfactory effects, low classification accuracy, and weak adaptive ability. These methods separate feature extraction and classification into two steps. A deep learning model, by contrast, has a powerful learning ability that integrates feature extraction and classification into a whole to complete the image classification task, which can effectively improve classification accuracy. However, this approach has the following problems in practice: first, it is difficult to effectively approximate the complex functions in the deep learning model; second, the classifier that comes with the deep learning model has low accuracy. This paper therefore introduces the idea of sparse representation into the architecture of the deep learning network, combining the ability of sparse representation to linearly decompose multidimensional data with the structural advantages of deep multilayer nonlinear mapping to approximate the complex functions in the deep learning model. A sparse representation classification method based on an optimized kernel function is also proposed to replace the classifier in the deep learning model, thereby improving the image classification effect. The paper thus proposes an image classification algorithm based on a stacked sparse coding deep learning model with optimized-kernel-function nonnegative sparse representation. The experimental results show that the proposed method not only has a higher average accuracy than other mainstream methods but also adapts well to various image databases. Compared with other deep learning methods, it better solves the problems of complex function approximation and poor classifier effect, further improving image classification accuracy.
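The classifier side of this abstract builds on sparse-representation classification: code a test sample over class-wise blocks of training samples, then pick the class whose block reconstructs it with the smallest residual. A toy sketch of that idea, using a minimum-norm least-squares code as a cheap surrogate for the ℓ1 sparse code (all data synthetic; this is not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)

n_dim, per_class, n_classes = 20, 15, 3
# Each class: a shared mean direction plus small per-sample noise.
classes = [rng.standard_normal((n_dim, 1))
           + 0.1 * rng.standard_normal((n_dim, per_class))
           for _ in range(n_classes)]
D = np.hstack(classes)            # dictionary whose columns are training samples
D /= np.linalg.norm(D, axis=0)    # unit-norm atoms

# Test sample: a (normalised) training sample drawn from class 1.
y = classes[1][:, 0] / np.linalg.norm(classes[1][:, 0])

# Minimum-norm least-squares code as a surrogate for the l1 sparse code.
code, *_ = np.linalg.lstsq(D, y, rcond=None)

# Classify by smallest class-wise reconstruction residual.
residuals = []
for c in range(n_classes):
    block = np.zeros_like(code)
    block[c * per_class:(c + 1) * per_class] = code[c * per_class:(c + 1) * per_class]
    residuals.append(float(np.linalg.norm(y - D @ block)))
print(int(np.argmin(residuals)))
```

Because the test sample lies (almost) in the span of its own class's block, that block's residual is smallest, which is the decision rule the kernel-optimized classifier in the abstract refines.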
15

Liu, Chunying, Yuwen Huang, Fuxian Huang, and Jijiang Yu. "Multifeature Deep Cascaded Learning for PPG Biometric Recognition." Scientific Programming 2022 (March 7, 2022): 1–8. http://dx.doi.org/10.1155/2022/7477746.

Abstract:
Aiming at the problem that the traditional photoplethysmography (PPG) biometric recognition based on sparse representation is not robust to noise and intraclass variations when the sample size is small, we propose a PPG biometric recognition method based on multifeature deep cascaded sparse representation (MFDCSR). The method consists of multifeature signal coding and deep cascaded coding. The function of multifeature signal coding is to extract the shape, wavelet, and principal component analysis features of the PPG signal and to perform sparse representation. Deep cascaded coding is multilayer feature coding. Each layer combines multifeature signal coding with the result of the previous layer as input, and the output of each layer is the input of the next layer. The function of deep cascaded coding is to learn the features of the PPG signal, layer by layer, and to output the category distribution vector of the PPG signal in the last layer. Experiments demonstrate that MFDCSR has better recognition performance than current methods for PPG biometric recognition.
16

Zhu, Yi, Lei Li, and Xindong Wu. "Stacked Convolutional Sparse Auto-Encoders for Representation Learning." ACM Transactions on Knowledge Discovery from Data 15, no. 2 (April 2021): 1–21. http://dx.doi.org/10.1145/3434767.

Abstract:
Deep learning seeks to achieve excellent performance for representation learning in image datasets. However, supervised deep learning models such as convolutional neural networks require a large number of labeled image data, which is intractable in applications, while unsupervised deep learning models like the stacked denoising auto-encoder cannot employ label information. Meanwhile, the redundancy of image data incurs performance degradation on representation learning for the aforementioned models. To address these problems, we propose a semi-supervised deep learning framework called the stacked convolutional sparse auto-encoder, which can learn robust and sparse representations from image data with fewer labeled data records. More specifically, the framework is constructed by stacking layers. In each layer, higher-layer feature representations are generated from features of lower layers in a convolutional way, with kernels learned by a sparse auto-encoder. Meanwhile, to solve the data redundancy problem, the algorithm of Reconstruction Independent Component Analysis is designed to train on patches for sphering the input data. The label information is encoded using a Softmax Regression model for semi-supervised learning. With this framework, higher-level representations are learned by layers mapping from image data. It can boost the performance of subsequent base classifiers such as support vector machines. Extensive experiments demonstrate the superior classification performance of our framework compared to several state-of-the-art representation learning methods.
17

Zhang, Xu, Wei Huang, Jing Gao, Dapeng Wang, Changchuan Bai, and Zhikui Chen. "Deep sparse transfer learning for remote smart tongue diagnosis." Mathematical Biosciences and Engineering 18, no. 2 (2021): 1169–86. http://dx.doi.org/10.3934/mbe.2021063.

18

Kim, MyeongSeop, and Jung-Su Kim. "Policy-based Deep Reinforcement Learning for Sparse Reward Environment." Transactions of The Korean Institute of Electrical Engineers 70, no. 3 (March 31, 2021): 506–14. http://dx.doi.org/10.5370/kiee.2021.70.3.506.

19

WU, Renjie, and Sei-ichiro KAMATA. "Sparse Graph Based Deep Learning Networks for Face Recognition." IEICE Transactions on Information and Systems E101.D, no. 9 (September 1, 2018): 2209–19. http://dx.doi.org/10.1587/transinf.2017pcp0012.

20

Wandale, Steven, and Koichi Ichige. "Simulated Annealing Assisted Sparse Array Selection Utilizing Deep Learning." IEEE Access 9 (2021): 156907–14. http://dx.doi.org/10.1109/access.2021.3129856.

21

Nascimento, Jacinto C., and Gustavo Carneiro. "Deep Learning on Sparse Manifolds for Faster Object Segmentation." IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4978–90. http://dx.doi.org/10.1109/tip.2017.2725582.

22

Kechagias-Stamatis, Odysseas, and Nabil Aouf. "Fusing Deep Learning and Sparse Coding for SAR ATR." IEEE Transactions on Aerospace and Electronic Systems 55, no. 2 (April 2019): 785–97. http://dx.doi.org/10.1109/taes.2018.2864809.

23

Ma, Rongrong, Jianyu Miao, Lingfeng Niu, and Peng Zhang. "Transformed ℓ1 regularization for learning sparse deep neural networks." Neural Networks 119 (November 2019): 286–98. http://dx.doi.org/10.1016/j.neunet.2019.08.015.

24

Wenying, Wang, Wei Yao, Zhen Xuanxuan, Yu Hui, and Wang Ruqi. "Classifying aircraft based on sparse recovery and deep-learning." Journal of Engineering 2019, no. 21 (November 1, 2019): 7464–68. http://dx.doi.org/10.1049/joe.2019.0633.

25

Zhao, Jin, and Licheng Jiao. "Sparse Deep Tensor Extreme Learning Machine for Pattern Classification." IEEE Access 7 (2019): 119181–91. http://dx.doi.org/10.1109/access.2019.2924647.

26

Yang, Shuyuan, Quanwei Gao, and Shigang Wang. "Learning a Deep Representative Saliency Map With Sparse Tensors." IEEE Access 7 (2019): 117861–70. http://dx.doi.org/10.1109/access.2019.2931921.

27

Lee, Sangkyun, and Jeonghyun Lee. "Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems." Applied Sciences 9, no. 8 (April 23, 2019): 1669. http://dx.doi.org/10.3390/app9081669.

Abstract:
Deep neural networks (DNNs) have been quite successful in solving many complex learning problems. However, DNNs tend to have a large number of learning parameters, leading to large memory and computation requirements. In this paper, we propose a model compression framework for efficient training and inference of deep neural networks on embedded systems. Our framework provides data structures and kernels for OpenCL-based parallel forward and backward computation in a compressed form. In particular, our method learns sparse representations of parameters using ℓ1-based sparse coding while training, storing them in compressed sparse matrices. Unlike previous works, our method does not require a pre-trained model as an input and therefore can be more versatile for different application environments. Even though the use of ℓ1-based sparse coding for model compression is not new, we show that it can be far more effective than previously reported when we use proximal point algorithms and the technique of debiasing. Our experiments show that our method can produce minimal learning models suitable for small embedded devices.
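The core training step the abstract describes, learning sparse parameters via ℓ1 sparse coding and storing them compressed, boils down to a proximal (soft-thresholding) update followed by sparse storage. A minimal sketch with invented shapes, learning rate, and penalty (not the paper's OpenCL kernels):

```python
import numpy as np

rng = np.random.default_rng(0)

# A layer's dense weights (shape invented for illustration).
W = rng.standard_normal((256, 128))

# Soft-thresholding is the proximal operator of lam * ||W||_1; after a
# gradient step of size lr, the threshold is lr * lam. Only the shrinkage
# itself is shown here.
lam, lr = 10.0, 0.1
thr = lr * lam
W_sparse = np.sign(W) * np.maximum(np.abs(W) - thr, 0.0)

# Keep only the nonzeros: a minimal triplet (COO-style) compressed storage.
rows, cols = np.nonzero(W_sparse)
vals = W_sparse[rows, cols]
density = vals.size / W.size
print(f"density after shrinkage: {density:.2f}")
```

The debiasing step mentioned in the abstract would then re-fit the surviving nonzero weights without the ℓ1 penalty, removing the shrinkage bias on the kept entries.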
28

Liu, Hongmei, Lianfeng Li, and Jian Ma. "Rolling Bearing Fault Diagnosis Based on STFT-Deep Learning and Sound Signals." Shock and Vibration 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/6127479.

Abstract:
The main challenge of fault diagnosis lies in finding good fault features. A deep learning network has the ability to automatically learn good characteristics from input data in an unsupervised fashion, and its unique layer-wise pretraining and fine-tuning using the backpropagation strategy can solve the difficulties of training deep multilayer networks. Stacked sparse autoencoders or other deep architectures have shown excellent performance in speech recognition, face recognition, text classification, image recognition, and other application domains. Thus far, however, there have been very few research studies on deep learning in fault diagnosis. In this paper, a new rolling bearing fault diagnosis method that is based on short-time Fourier transform and stacked sparse autoencoder is first proposed; this method analyzes sound signals. After spectrograms are obtained by short-time Fourier transform, stacked sparse autoencoder is employed to automatically extract the fault features, and softmax regression is adopted as the method for classifying the fault modes. The proposed method, when applied to sound signals that are obtained from a rolling bearing test rig, is compared with empirical mode decomposition, Teager energy operator, and stacked sparse autoencoder when using vibration signals to verify the performance and effectiveness of the proposed method.
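The front end of this pipeline, turning a sound signal into a spectrogram via the short-time Fourier transform before the stacked sparse autoencoder sees it, can be sketched as follows (illustrative signal, window, and hop sizes, not the authors' settings):

```python
import numpy as np

# Simulated "bearing-like" tone in noise: a 120 Hz component at 8 kHz sampling.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
signal = (np.sin(2 * np.pi * 120 * t)
          + 0.3 * np.random.default_rng(0).standard_normal(t.size))

# Manual STFT: slide a Hann window and take the magnitude of each frame's FFT.
win, hop = 256, 128
window = np.hanning(win)
frames = [signal[i:i + win] * window
          for i in range(0, signal.size - win + 1, hop)]
spectrogram = np.abs(np.fft.rfft(np.stack(frames), axis=1))  # (frames, bins)
print(spectrogram.shape)  # (61, 129)
```

Each row of `spectrogram` is one time slice; the stacked sparse autoencoder in the abstract would consume such spectrogram patches as its unsupervised training input.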
29

Chen, Junjie, and Xinghua Shi. "Sparse Convolutional Denoising Autoencoders for Genotype Imputation." Genes 10, no. 9 (August 28, 2019): 652. http://dx.doi.org/10.3390/genes10090652.

Abstract:
Genotype imputation, where missing genotypes can be computationally imputed, is an essential tool in genomic analysis, ranging from genome-wide associations to phenotype prediction. Traditional genotype imputation methods are typically based on haplotype-clustering algorithms, hidden Markov models (HMMs), and statistical inference. Deep learning-based methods have recently been reported to suitably address missing data problems in various fields. To explore the performance of deep learning for genotype imputation, in this study, we propose a deep model called a sparse convolutional denoising autoencoder (SCDA) to impute missing genotypes. We constructed the SCDA model using a convolutional layer that can extract various correlation or linkage patterns in the genotype data and applying a sparse weight matrix resulting from L1 regularization to handle high-dimensional data. We comprehensively evaluated the performance of the SCDA model in different scenarios for genotype imputation on yeast and human genotype data, respectively. Our results showed that SCDA has strong robustness and significantly outperforms popular reference-free imputation methods. This study thus points to another novel application of deep learning models for missing data imputation in genomic studies.
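The denoising-autoencoder training setup the abstract relies on starts from complete genotypes, corrupts them with artificial missingness, and learns to reconstruct the originals. A toy sketch of that data preparation plus a per-SNP-mean baseline (synthetic 0/1/2 genotypes, invented missingness rate; the SCDA model itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic genotype matrix: samples x SNPs, values in {0, 1, 2}.
genotypes = rng.integers(0, 3, size=(100, 50)).astype(float)

# Corrupt ~10% of entries to simulate missing genotypes for denoising training.
mask = rng.random(genotypes.shape) < 0.1
corrupted = genotypes.copy()
corrupted[mask] = -1.0               # sentinel value for "missing"

# Reference-free baseline "imputation": fill each missing entry with the
# per-SNP mean of the observed values -- the kind of baseline a learned
# model must beat.
observed = np.where(mask, np.nan, genotypes)
snp_means = np.nanmean(observed, axis=0)
imputed = np.where(mask, snp_means[None, :], genotypes)
print(imputed.shape)  # (100, 50)
```

The SCDA would be trained to map `corrupted` back to `genotypes`, and its accuracy on the masked entries compared against baselines like the one above.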
30

Zhao, Siyuan, Jiacheng Ni, Jia Liang, Shichao Xiong, and Ying Luo. "End-to-End SAR Deep Learning Imaging Method Based on Sparse Optimization." Remote Sensing 13, no. 21 (November 3, 2021): 4429. http://dx.doi.org/10.3390/rs13214429.

Abstract:
Synthetic aperture radar (SAR) imaging has developed rapidly in recent years. Although traditional sparse optimization imaging algorithms have achieved effective results, their shortcomings are slow imaging speed, a large number of parameters, and high computational complexity. To solve these problems, an end-to-end SAR deep learning imaging algorithm is proposed. Based on the existing SAR sparse imaging algorithm, the SAR imaging model is first rewritten into SAR complex-signal form based on the real-valued model. Second, instead of arranging the two-dimensional echo data into a vector to construct an observation matrix, the algorithm derives the neural network imaging model from the iterative soft-thresholding algorithm (ISTA) directly in the two-dimensional data domain, and then reconstructs the observation scene through the superposition and expansion of the multi-layer network. Finally, experiments on simulated data and measured data of three targets verify that our algorithm is superior to the traditional sparse algorithm in terms of imaging quality, imaging time, and the number of parameters.
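The iteration such a network unrolls is ISTA: each layer corresponds to a gradient step on the data-fidelity term followed by soft-thresholding. A minimal sketch on a generic sparse recovery problem, with a random matrix standing in for the SAR observation operator and all sizes invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic sparse recovery: y = A x with a k-sparse x.
m, n, k = 60, 120, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(1000):                  # each iteration = one unrolled "layer"
    x = x - (A.T @ (A @ x - y)) / L    # gradient step on ||Ax - y||^2 / 2
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold

print(float(np.linalg.norm(A @ x - y)))   # data residual, should be small
```

In the unrolled-network view, the step size, threshold, and even the operator applications become learnable per-layer parameters trained end-to-end, which is what replaces the hand-tuned iterations of the traditional sparse algorithm.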
31

Du, Wenliao, Shuangyuan Wang, Xiaoyun Gong, Hongchao Wang, Xingyan Yao, and Michael Pecht. "Translation Invariance-Based Deep Learning for Rotating Machinery Diagnosis." Shock and Vibration 2020 (August 11, 2020): 1–16. http://dx.doi.org/10.1155/2020/1635621.

Abstract:
Discriminative feature extraction is a challenge for data-driven fault diagnosis. Although deep learning algorithms can automatically learn a good set of features without manual intervention, the lack of domain knowledge greatly limits the performance improvement, especially for nonstationary and nonlinear signals. This paper develops a multiscale information fusion-based stacked sparse autoencoder fault diagnosis method. The autoencoder takes advantage of the multiscale normalized frequency spectrum information obtained by dual-tree complex wavelet transform as input. Accordingly, the multiscale normalized features guarantee the translational invariance for signal characteristics, and the stacked sparse autoencoder benefits the unsupervised feature learning and ensures accurate and stable diagnosis performance. The developed method is performed on motor bearing vibration signals and worm gearbox vibration signals, respectively. The results confirm that the developed method can accommodate changing working conditions, be free of manual feature extraction, and perform better than the existing intelligent diagnosis methods.
32

Li, Guang, Xianjie Gu, Zhengyong Ren, Qihong Wu, Xiaoqiong Liu, Liang Zhang, Donghan Xiao, and Cong Zhou. "Deep Learning Optimized Dictionary Learning and Its Application in Eliminating Strong Magnetotelluric Noise." Minerals 12, no. 8 (August 12, 2022): 1012. http://dx.doi.org/10.3390/min12081012.

Abstract:
The noise suppression method based on dictionary learning has shown great potential in magnetotelluric (MT) data processing. However, the constraints used in the existing method need to be set manually, which significantly limits its application. To solve this problem, we propose a deep learning-optimized dictionary learning denoising method. We use a deep convolutional network to learn the characteristic parameters of high-quality MT data independently and then use them as the constraints for dictionary learning so as to achieve fully adaptive sparse decomposition. The method uses unified parameters for all data and completely eliminates subjective bias, which makes it possible to batch-process MT data using sparse decomposition. The processing results of simulated and field data examples show that the new method has good adaptability and can achieve recognition with high accuracy. After processing with our method, the apparent resistivity and phase curves became smoother and more continuous, and the results were validated by the remote reference method. Our method can be an effective alternative when no remote reference station is set up or remote reference processing is not effective.
33

Munir, Muhammad Asif, Muhammad Aqeel Aslam, Muhammad Shafique, Rauf Ahmed, and Zafar Mehmood. "Deep Stacked Sparse Autoencoders – A Breast Cancer Classifier." Mehran University Research Journal of Engineering and Technology 41, no. 1 (January 1, 2022): 41–52. http://dx.doi.org/10.22581/muet1982.2201.05.

Abstract:
Breast cancer is one of the non-communicable diseases that are a major cause of women's mortality around the globe. Early diagnosis of breast cancer significantly reduces deaths. This chronic disease requires careful and lengthy prognostic procedures before reaching a rational decision about optimum clinical treatments. During the last decade, machine learning and deep learning-based approaches have been implemented in Computer-Aided Diagnostic (CAD) systems to provide solutions with the lowest error probabilities in breast cancer screening practices. These methods aim for optimal and acceptable results with little human intervention. In this article, Deep Stacked Sparse Autoencoders for breast cancer diagnosis and classification are proposed. The proposed algorithms and methods are evaluated and tested in MATLAB R2017b on the Breast Cancer Wisconsin (Diagnostic) Data Set (WDBC), and the achieved results surpass existing CAD techniques and methods in terms of classification accuracy and efficiency.
34

Rodziewicz-Bielewicz, Jan, and Marcin Korzeń. "Comparison of Graph Fitting and Sparse Deep Learning Model for Robot Pose Estimation." Sensors 22, no. 17 (August 29, 2022): 6518. http://dx.doi.org/10.3390/s22176518.

Abstract:
The paper presents a simple, yet robust computer vision system for robot arm tracking with the use of RGB-D cameras. Tracking means to measure in real time the robot state given by three angles and with known restrictions about the robot geometry. The tracking system consists of two parts: image preprocessing and machine learning. In the machine learning part, we compare two approaches: fitting the robot pose to the point cloud and fitting the convolutional neural network model to the sparse 3D depth images. The advantage of the presented approach is direct use of the point cloud transformed to the sparse image in the network input and use of sparse convolutional and pooling layers (sparse CNN). The experiments confirm that the robot tracking is performed in real time and with an accuracy comparable to the accuracy of the depth sensor.
35

Rahal, Najoua, Maroua Tounsi, Amir Hussain, and Adel M. Alimi. "Deep Sparse Auto-Encoder Features Learning for Arabic Text Recognition." IEEE Access 9 (2021): 18569–84. http://dx.doi.org/10.1109/access.2021.3053618.

36

Sun, Chang, Yitong Liu, and Hongwen Yang. "Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction." Tomography 7, no. 4 (December 9, 2021): 932–49. http://dx.doi.org/10.3390/tomography7040077.

Abstract:
Sparse-view CT reconstruction is a fundamental task in computed tomography to overcome undesired artifacts and recover the details of textural structure in degraded CT images. Recently, many deep learning-based networks have achieved desirable performance compared to iterative reconstruction algorithms. However, the performance of these methods may severely deteriorate when the degradation strength of the test image is not consistent with that of the training dataset. In addition, these methods do not pay enough attention to the characteristics of different degradation levels, so solely extending the training dataset with multiple degraded images is also not effective. Although training plentiful models in terms of each degradation level can mitigate this problem, extensive parameter storage is involved. Accordingly, in this paper, we focused on sparse-view CT reconstruction for multiple degradation levels. We propose a single degradation-aware deep learning framework to predict clear CT images by understanding the disparity of degradation in both the frequency domain and image domain. The dual-domain procedure can perform particular operations at different degradation levels in frequency component recovery and spatial details reconstruction. The peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and visual results demonstrate that our method outperformed the classical deep learning-based reconstruction methods in terms of effectiveness and scalability.
37

Zhang, Xueting, Fei Ma, Yuanke Zhang, Jiajun Wang, Chengbo Liu, and Jing Meng. "Sparse-sampling photoacoustic computed tomography: Deep learning vs. compressed sensing." Biomedical Signal Processing and Control 71 (January 2022): 103233. http://dx.doi.org/10.1016/j.bspc.2021.103233.

38

Janarthan, Sivasubramaniam, Selvarajah Thuseethan, Sutharshan Rajasegarar, Qiang Lyu, Yongqiang Zheng, and John Yearwood. "Deep Metric Learning Based Citrus Disease Classification With Sparse Data." IEEE Access 8 (2020): 162588–600. http://dx.doi.org/10.1109/access.2020.3021487.

39

Singhal, Vanika, and Angshul Majumdar. "Row-Sparse Discriminative Deep Dictionary Learning for Hyperspectral Image Classification." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11, no. 12 (December 2018): 5019–28. http://dx.doi.org/10.1109/jstars.2018.2877769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Fan, Jiayuan, Tao Chen, and Shijian Lu. "Superpixel Guided Deep-Sparse-Representation Learning for Hyperspectral Image Classification." IEEE Transactions on Circuits and Systems for Video Technology 28, no. 11 (November 2018): 3163–73. http://dx.doi.org/10.1109/tcsvt.2017.2746684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Nie, Liqiang, Meng Wang, Luming Zhang, Shuicheng Yan, Bo Zhang, and Tat-Seng Chua. "Disease Inference from Health-Related Questions via Sparse Deep Learning." IEEE Transactions on Knowledge and Data Engineering 27, no. 8 (August 1, 2015): 2107–19. http://dx.doi.org/10.1109/tkde.2015.2399298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Gongming, Qing-Shan Jia, Junfei Qiao, Jing Bi, and Caixia Liu. "A sparse deep belief network with efficient fuzzy learning framework." Neural Networks 121 (January 2020): 430–40. http://dx.doi.org/10.1016/j.neunet.2019.09.035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Gong, Maoguo, Jia Liu, Hao Li, Qing Cai, and Linzhi Su. "A Multiobjective Sparse Feature Learning Model for Deep Neural Networks." IEEE Transactions on Neural Networks and Learning Systems 26, no. 12 (December 2015): 3263–77. http://dx.doi.org/10.1109/tnnls.2015.2469673.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Alam, Mahbubul, Lasitha S. Vidyaratne, and Khan M. Iftekharuddin. "Sparse Simultaneous Recurrent Deep Learning for Robust Facial Expression Recognition." IEEE Transactions on Neural Networks and Learning Systems 29, no. 10 (October 2018): 4905–16. http://dx.doi.org/10.1109/tnnls.2017.2776248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Chao, Jian Wang, Jingjing Wang, and Xudong Zhang. "Deep-Reinforcement-Learning-Based Autonomous UAV Navigation With Sparse Rewards." IEEE Internet of Things Journal 7, no. 7 (July 2020): 6180–90. http://dx.doi.org/10.1109/jiot.2020.2973193.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bai, Junhang, Yongliang Sun, Weixiao Meng, and Cheng Li. "Wi-Fi Fingerprint-Based Indoor Mobile User Localization Using Deep Learning." Wireless Communications and Mobile Computing 2021 (January 8, 2021): 1–12. http://dx.doi.org/10.1155/2021/6660990.

Full text
Abstract:
In recent years, deep learning has been used for Wi-Fi fingerprint-based localization and has achieved remarkable performance, which is expected to satisfy the increasing requirements of indoor location-based services (LBS). In this paper, we propose a Wi-Fi fingerprint-based indoor mobile user localization method that integrates a stacked improved sparse autoencoder (SISAE) and a recurrent neural network (RNN). We improve the sparse autoencoder by adding an activity penalty term to its loss function to control the neuron outputs in the hidden layer. The encoders of three improved sparse autoencoders are stacked to obtain high-level feature representations of received signal strength (RSS) vectors, and an SISAE is constructed for localization by adding a logistic regression layer as the output layer on top of the stacked encoders. Meanwhile, using the previous location coordinates computed by the trained SISAE as extra inputs, an RNN is employed to compute more accurate current location coordinates for mobile users. The experimental results demonstrate that the mean error of the proposed SISAE-RNN for mobile user localization can be reduced to 1.60 m.
APA, Harvard, Vancouver, ISO, and other styles
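The activity-penalized autoencoder loss described in the abstract above can be sketched in numpy. This is a hypothetical illustration: the quadratic penalty form, the `beta` weight, and the array shapes are assumptions, not the authors' exact formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sae_loss(x, W_enc, b_enc, W_dec, b_dec, beta=0.1):
    """Reconstruction loss plus an activity penalty on the hidden-layer outputs.

    The penalty term (mean squared activation, weighted by beta) is one plausible
    way to keep hidden neuron outputs small and the representation sparse.
    """
    h = sigmoid(x @ W_enc + b_enc)       # hidden representation of RSS vectors
    x_hat = sigmoid(h @ W_dec + b_dec)   # reconstruction
    recon = np.mean((x - x_hat) ** 2)    # reconstruction error
    penalty = beta * np.mean(h ** 2)     # activity penalty on hidden outputs
    return recon + penalty

rng = np.random.default_rng(0)
x = rng.random((4, 10))                  # four RSS vectors, ten access points
W_enc = rng.standard_normal((10, 5)) * 0.1
W_dec = rng.standard_normal((5, 10)) * 0.1
loss = sae_loss(x, W_enc, np.zeros(5), W_dec, np.zeros(10))
print(np.isfinite(loss) and loss > 0)
```

In the stacked setting, three such encoders would be trained and composed, with a logistic regression layer on top for localization.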
48

Wang, Wenyu, Baozhu Li, Zhen Huang, and Lei Zhu. "Deep Learning-Based Localization with Urban Electromagnetic and Geographic Information." Wireless Communications and Mobile Computing 2022 (August 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/9680479.

Full text
Abstract:
There is a growing demand for the localization of illegal signal sources to guarantee the security of the urban electromagnetic environment. The performance of traditional localization methods is limited by non-line-of-sight (NLOS) propagation and sparse sensor layouts. In this paper, a deep learning-based localization method is proposed to overcome these issues in urban scenarios. First, a model of electromagnetic wave propagation that accounts for geographic information is proposed to prepare reliable datasets for intelligent cognition of the urban electromagnetic environment. This paper then improves on an hourglass neural network consisting of downsampling and upsampling layers to learn propagation features from sensing data; the core modules of VGG and ResNet are utilized as feature extractors in downsampling, respectively. Moreover, this paper proposes a weighted loss function that increases attention to position features, in order to improve localization performance under sparse sensor layouts. Representative numerical results are discussed to assess the proposed method. The ResNet-based extractor performs more efficiently than the VGG-based extractor, and the proposed weighted loss function increases the localization accuracy by more than 50%. Additionally, the established geographic model supports qualitative and quantitative evaluation of robustness under varying degrees of NLOS propagation. Compared with other deep learning-based algorithms, the proposed method delivers more robust and superior performance under severe NLOS propagation and sparse sensing conditions.
APA, Harvard, Vancouver, ISO, and other styles
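A weighted loss that emphasizes position features, as described in the abstract above, can be sketched as a weighted mean-squared error over a source-likelihood grid. The grid representation, the `pos_weight` value, and the toy data are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def weighted_mse(pred_map, true_map, pos_weight=10.0):
    """MSE over a predicted source-likelihood grid, with extra weight on
    cells that actually contain an emitter, so missing the source position
    costs more than background noise."""
    weights = np.where(true_map > 0, pos_weight, 1.0)
    return np.mean(weights * (pred_map - true_map) ** 2)

true_map = np.zeros((4, 4))
true_map[1, 2] = 1.0                       # single emitter in one grid cell
flat_pred = np.full((4, 4), 0.1)           # a prediction that misses the emitter
loss = weighted_mse(flat_pred, true_map)
print(loss > np.mean((flat_pred - true_map) ** 2))  # True: the miss is penalized more
```

Up-weighting the emitter cells pushes the network to sharpen its predictions around the true source position rather than hedging with a flat map.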
49

Kokhanovskiy, Alexey, Nikita Shabalov, Alexandr Dostovalov, and Alexey Wolf. "Highly Dense FBG Temperature Sensor Assisted with Deep Learning Algorithms." Sensors 21, no. 18 (September 15, 2021): 6188. http://dx.doi.org/10.3390/s21186188.

Full text
Abstract:
In this paper, we demonstrate the application of deep neural networks (DNNs) for processing the reflectance spectrum from a fiber-optic temperature sensor composed of densely inscribed fiber Bragg gratings (FBGs). Such sensors are commonly avoided in practice since the close arrangement of short FBGs distorts the spectrum through mutual interference between gratings. In our work, the temperature sensor contained 50 FBGs with a length of 0.95 mm and an edge-to-edge distance of 0.05 mm, arranged in the 1500–1600 nm spectral range. Instead of solving the direct peak-detection problem for the distorted signal, we applied DNNs to predict the temperature distribution from the entire reflectance spectrum registered by the sensor. We propose an experimental calibration setup in which the dense FBG sensor is located close to an array of sparse FBG sensors. The goal of the DNNs is to predict the positions of the reflectance peaks of the reference sparse FBG sensors from the reflectance spectrum of the dense FBG sensor. We show that a convolutional neural network is able to predict the positions of the FBG reflectance peaks of the sparse sensors with a mean absolute error of 7.8 pm, only slightly higher than the 5 pm error of the hardware interrogator. We believe that dense FBG sensors assisted by DNNs have high potential to increase the spatial resolution and extend the length of fiber-optic sensors.
APA, Harvard, Vancouver, ISO, and other styles
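The mean-absolute-error figure in the abstract above is an error on predicted peak wavelengths, reported in picometres. A minimal sketch of the metric, with hypothetical peak wavelengths chosen to land near the reported 7.8 pm figure:

```python
import numpy as np

def mae_pm(predicted_nm, reference_nm):
    """Mean absolute error between predicted and reference FBG peak
    wavelengths, reported in picometres (1 nm = 1000 pm)."""
    diff = np.abs(np.asarray(predicted_nm) - np.asarray(reference_nm))
    return np.mean(diff) * 1000.0

ref = np.array([1500.000, 1502.000, 1504.000])    # reference peak positions, nm
pred = np.array([1500.008, 1501.992, 1504.008])   # CNN predictions, 8 pm off each
print(round(mae_pm(pred, ref), 1))  # 8.0
```

At this scale, a few picometres of wavelength error corresponds to a small fraction of a degree in temperature, which is why a 7.8 pm MAE is competitive with the 5 pm hardware interrogator.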
50

Ahamed, Saahira, Ebtesam Shadadi, Latifah Alamer, and Mousa Khubrani. "Deep Anomaly Net: Detecting Moving Object Abnormal Activity Using Tensor Flow." Journal of Internet Services and Information Security 12, no. 4 (November 30, 2022): 116–25. http://dx.doi.org/10.58346/jisis.2022.i4.008.

Full text
Abstract:
Sparse coding-based anomaly detection has shown promising performance, its key components being feature learning, sparse representation, and dictionary learning. We propose a new neural network for anomaly detection, called AnomalyNet, that performs deep feature learning, sparse representation, and dictionary learning in three collaborative neural processing blocks. In particular, to obtain better features, we form a motion fusion block accompanied by a feature transfer block to enjoy the benefits of eliminating background noise, capturing motion, and alleviating data deficiency. In addition, to address some shortcomings (such as non-adaptive updating) of existing sparse coding optimizers and to exploit the advantages of neural networks (such as parallel computation), we design a novel recurrent neural network that learns the sparse representation and the dictionary by proposing an adaptive iterative hard-thresholding algorithm (adaptive ISTA) and reformulating the adaptive ISTA as a new long short-term memory (LSTM). To the best of our knowledge, this may be one of the first works to link ℓ1-solvers and LSTMs, offering new insights into LSTMs and model-based optimization (so-called differentiable programming), primarily in the context of sparse coding-based anomaly detection. Extensive experiments show the state-of-the-art performance of our technique on the abnormal event detection task.
APA, Harvard, Vancouver, ISO, and other styles
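The ISTA family of solvers mentioned in the abstract above computes sparse codes by alternating a gradient step with a thresholding step. A numpy sketch of the classical, non-adaptive baseline with soft thresholding; the paper's adaptive variant instead learns the per-iteration updates (which is what makes it reformulable as an LSTM), and the problem sizes here are arbitrary:

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(D, y, lam=0.1, n_iter=100):
    """Classical ISTA for min_z 0.5*||y - D z||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)           # gradient of the data-fit term
        z = soft_threshold(z - grad / L, lam / L)
    return z

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 40))          # overcomplete dictionary
z_true = np.zeros(40)
z_true[[3, 17]] = [1.0, -0.5]              # 2-sparse ground-truth code
y = D @ z_true                             # noiseless observation
z_hat = ista(D, y)
print(z_hat.shape, np.count_nonzero(np.abs(z_hat) > 1e-3))
```

Unrolling a fixed number of such iterations and making the step size and threshold learnable per iteration turns the solver into a trainable recurrent network, which is the bridge to LSTMs the abstract describes.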