Journal articles on the topic 'Neural network subspace'

Consult the top 50 journal articles for your research on the topic 'Neural network subspace.'

1

Oja, Erkki. "NEURAL NETWORKS, PRINCIPAL COMPONENTS, AND SUBSPACES." International Journal of Neural Systems 01, no. 01 (January 1989): 61–68. http://dx.doi.org/10.1142/s0129065789000475.

Abstract:
A single neuron with Hebbian-type learning for the connection weights, and with nonlinear internal feedback, has been shown to extract the statistical principal components of its stationary input pattern sequence. A generalization of this model to a layer of neuron units is given, called the Subspace Network, which yields a multi-dimensional, principal component subspace. This can be used as an associative memory for the input vectors or as a module in nonsupervised learning of data clusters in the input space. It is also able to realize a powerful pattern classifier based on projections on class subspaces. Some classification results for natural textures are given.
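
For readers who want the flavor of the rule, here is a minimal numpy sketch of a symmetric subspace learning rule of the kind the abstract describes (y = Wx, ΔW = ηy(x − Wᵀy)ᵀ); the data distribution, dimensions, and learning rate are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, eta, steps = 10, 3, 0.005, 30000

# Synthetic data whose variance concentrates in the first three coordinates,
# so the true principal subspace is known (illustrative assumption).
scales = np.array([5.0, 4.0, 3.0] + [0.3] * (d - 3))
W = 0.1 * rng.normal(size=(k, d))          # one row per neuron

for _ in range(steps):
    x = scales * rng.normal(size=d)
    y = W @ x                              # layer outputs
    W += eta * np.outer(y, x - W.T @ y)    # Hebbian term with feedback x_hat = W^T y

# Compare the learned projector with the true top-k principal projector.
P_est = W.T @ np.linalg.pinv(W @ W.T) @ W
P_true = np.zeros((d, d)); P_true[:k, :k] = np.eye(k)
print("projector error:", np.linalg.norm(P_est - P_true))
```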
2

Edraki, Marzieh, Nazanin Rahnavard, and Mubarak Shah. "SubSpace Capsule Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10745–53. http://dx.doi.org/10.1609/aaai.v34i07.6703.

Abstract:
Convolutional neural networks (CNNs) have become a key asset in most fields of AI. Despite their successful performance, CNNs suffer from a major drawback: they fail to capture the hierarchy of spatial relations among the parts of an entity. As a remedy to this problem, the idea of capsules was proposed by Hinton. In this paper, we propose the SubSpace Capsule Network (SCN) that exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity through a group of capsule subspaces, instead of simply grouping neurons to create capsules. A capsule is created by projecting an input feature vector from a lower layer onto the capsule subspace using a learnable transformation. This transformation finds the degree of alignment of the input with the properties modeled by the capsule subspace. We show that SCN is a general capsule network that can successfully be applied to both discriminative and generative models without incurring computational overhead compared to CNNs during test time. The effectiveness of SCN is evaluated through a comprehensive set of experiments on supervised image classification, semi-supervised image classification, and high-resolution image generation tasks using the generative adversarial network (GAN) framework. SCN significantly improves the performance of the baseline models in all three tasks.
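
The core operation described here, projecting an input feature vector onto learnable capsule subspaces and reading off the degree of alignment, can be sketched as follows; the basis shapes and the norm-based response are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def capsule_responses(f, bases):
    """For each capsule subspace (spanned by the columns of B), return the
    norm of the orthogonal projection of f onto it: the 'degree of
    alignment' of the input with the properties the capsule models."""
    responses = []
    for B in bases:
        P = B @ np.linalg.pinv(B)          # projector onto span(B)
        responses.append(np.linalg.norm(P @ f))
    return np.array(responses)

rng = np.random.default_rng(1)
d, c, n_capsules = 16, 4, 5                # feature dim, capsule dim, #capsules
bases = [rng.normal(size=(d, c)) for _ in range(n_capsules)]  # learnable in SCN
print(capsule_responses(rng.normal(size=d), bases))
```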
3

Zhi, Chuan, Ling Hua Guo, Mei Yun Zhang, and Yi Shi. "Research on Dynamic Subspace Divided BP Neural Network Identification Method of Color Space Transform Model." Advanced Materials Research 174 (December 2010): 97–100. http://dx.doi.org/10.4028/www.scientific.net/amr.174.97.

Abstract:
In order to improve the precision of BP neural network color space conversion, this paper takes the RGB and CIE L*a*b* color spaces as an example. Based on the input value, the color space is dynamically divided into many subspaces. Adopting a BP neural network within each subspace effectively avoids the local optima that the BP neural network encounters over the whole color space and greatly improves the color space conversion precision.
4

Funabashi, Masatoshi. "Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network." International Journal of Bifurcation and Chaos 25, no. 04 (April 2015): 1550054. http://dx.doi.org/10.1142/s0218127415500546.

Abstract:
We investigate the possible role of intermittent chaotic dynamics, called chaotic itinerancy, in interaction with nonsupervised learning rules that reinforce and weaken the neural connections depending on the dynamics itself. We first performed a hierarchical stability analysis of the Chaotic Neural Network model (CNN) according to the structure of invariant subspaces. Irregular transition between two attractor ruins with positive maximum Lyapunov exponent was triggered by the blowout bifurcation of the attractor spaces and was associated with a riddled basin structure. We then modeled two autonomous learning rules, Hebbian learning and the spike-timing-dependent plasticity (STDP) rule, and simulated their effect on the chaotic itinerancy state of the CNN. Hebbian learning increased the residence time on attractor ruins and produced novel attractors in the minimal higher-dimensional subspace. It also augmented the neuronal synchrony and established uniform modularity in chaotic itinerancy. The STDP rule reduced the residence time on attractor ruins and brought a wide range of periodicity in the emerged attractors, possibly including strange attractors. Both learning rules selectively destroyed and preserved specific invariant subspaces, depending on the neuron synchrony of the subspace where the orbits are situated. The computational rationale of the autonomous learning is discussed from a connectionist perspective.
5

Mahomud, V. A., A. S. Hadi, N. K. Wafi, and S. M. R. Taha. "DIRECTION OF ARRIVAL USING PCA NEURAL NETWORKS." Journal of Engineering 10, no. 1 (March 13, 2024): 83–89. http://dx.doi.org/10.31026/j.eng.2004.01.07.

Abstract:
This paper adapts the neural network to the estimation of the direction of arrival (DOA). It uses an unsupervised adaptive neural network with the APEX algorithm to extract the principal components, which are in turn used by the Capon method to estimate the DOA; through the PCA neural network, only the signal subspace is taken and used in Capon (i.e., the noise subspace is ignored).
6

Menghi, Nicholas, Kemal Kacar, and Will Penny. "Multitask learning over shared subspaces." PLOS Computational Biology 17, no. 7 (July 6, 2021): e1009092. http://dx.doi.org/10.1371/journal.pcbi.1009092.

Abstract:
This paper uses constructs from machine learning to define pairs of learning tasks that either shared or did not share a common subspace. Human subjects then learnt these tasks using a feedback-based approach and we hypothesised that learning would be boosted for shared subspaces. Our findings broadly supported this hypothesis with either better performance on the second task if it shared the same subspace as the first, or positive correlations over task performance for shared subspaces. These empirical findings were compared to the behaviour of a Neural Network model trained using sequential Bayesian learning and human performance was found to be consistent with a minimal capacity variant of this model. Networks with an increased representational capacity, and networks without Bayesian learning, did not show these transfer effects. We propose that the concept of shared subspaces provides a useful framework for the experimental study of human multitask and transfer learning.
7

Cao, Xiang, and A.-long Yu. "Multi-AUV Cooperative Target Search Algorithm in 3-D Underwater Workspace." Journal of Navigation 70, no. 6 (June 30, 2017): 1293–311. http://dx.doi.org/10.1017/s0373463317000376.

Abstract:
To improve the efficiency of multiple Autonomous Underwater Vehicles (multi-AUV) cooperative target search in a Three-Dimensional (3D) underwater workspace, an integrated algorithm is proposed by combining a Self-Organising Map (SOM), neural network and Glasius Bioinspired Neural Network (GBNN). With this integrated algorithm, the 3D underwater workspace is first divided into subspaces dependent on the abilities of the AUV team members. After that, tasks are allocated to each subspace for an AUV by SOM. Finally, AUVs move to the assigned subspace in the shortest way and start their search task by GBNN. This integrated algorithm, by avoiding overlapping search paths and raising the coverage rate, can reduce energy consumption of the whole multi-AUV system. The simulation results show that the proposed algorithm is capable of guiding multi-AUV to achieve a multiple target search task with higher efficiency and adaptability compared with a more traditional bioinspired neural network algorithm.
8

Laaksonen, Jorma, and Erkki Oja. "Learning Subspace Classifiers and Error-Corrective Feature Extraction." International Journal of Pattern Recognition and Artificial Intelligence 12, no. 04 (June 1998): 423–36. http://dx.doi.org/10.1142/s0218001498000270.

Abstract:
Subspace methods are a powerful class of statistical pattern classification algorithms. The subspaces form semiparametric representations of the pattern classes in the form of principal components. In this sense, subspace classification methods are an application of classical optimal data compression techniques. Additionally, the subspace formalism can be given a neural network interpretation. There are learning versions of the subspace classification methods, in which error-driven learning procedures are applied to the subspaces in order to reduce the number of misclassified vectors. An algorithm for iterative selection of the subspace dimensions is presented in this paper. Likewise, a modified formula for calculating the projection lengths in the subspaces is investigated. The principle of adaptive learning in subspace methods can further be applied to feature extraction. In our work, we have studied two adaptive feature extraction schemes. The adaptation process is directed by errors occurring in the classifier. Unlike most traditional classifier models which take the preceding feature extraction stage as given, this scheme allows for reducing the loss of information in the feature extraction stage. The enhanced overall classification performance resulting from the added adaptivity is demonstrated with experiments in which recognition of handwritten digits has been used as an exemplary application.
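
The basic subspace classification rule underlying these methods assigns a vector to the class whose subspace captures the largest projection of it; a minimal sketch, with per-class bases obtained by plain PCA rather than the paper's error-driven learning:

```python
import numpy as np

def subspace_classify(x, class_bases):
    """Assign x to the class with the largest projection length onto its
    subspace (bases are assumed to have orthonormal columns)."""
    return int(np.argmax([np.linalg.norm(U.T @ x) for U in class_bases]))

rng = np.random.default_rng(2)
# Two toy classes lying near different 2-D subspaces of R^5.
B0, _ = np.linalg.qr(rng.normal(size=(5, 2)))
B1, _ = np.linalg.qr(rng.normal(size=(5, 2)))
X0 = rng.normal(size=(100, 2)) @ B0.T + 0.05 * rng.normal(size=(100, 5))
X1 = rng.normal(size=(100, 2)) @ B1.T + 0.05 * rng.normal(size=(100, 5))

bases = []
for X in (X0, X1):                         # class subspaces via PCA
    _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    bases.append(Vt[:2].T)

print(subspace_classify(X0[0], bases), subspace_classify(X1[0], bases))  # typically 0 1
```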
9

Chandar, Sarath, Mitesh M. Khapra, Hugo Larochelle, and Balaraman Ravindran. "Correlational Neural Networks." Neural Computation 28, no. 2 (February 2016): 257–85. http://dx.doi.org/10.1162/neco_a_00801.

Abstract:
Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)–based approaches and autoencoder (AE)–based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
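
A linear, single-layer sketch of a CorrNet-style objective, reconstruction of both views minus a correlation term, may help fix ideas; the paper uses nonlinear autoencoders, and the weights and lambda here are illustrative assumptions.

```python
import numpy as np

def corrnet_style_loss(X, Y, Wx, Wy, lam=1.0):
    """Reconstruction error of both views minus lam * correlation of their
    common-subspace projections (linear stand-in for CorrNet's autoencoders)."""
    Hx, Hy = X @ Wx, Y @ Wy                        # projections to common subspace
    rec = (np.linalg.norm(X - Hx @ Wx.T) ** 2 +
           np.linalg.norm(Y - Hy @ Wy.T) ** 2)
    Hxc, Hyc = Hx - Hx.mean(0), Hy - Hy.mean(0)
    corr = np.sum(Hxc * Hyc) / (np.linalg.norm(Hxc) * np.linalg.norm(Hyc) + 1e-12)
    return rec - lam * corr
```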
10

Kizaric, Ben, and Daniel Pimentel-Alarcón. "Principle Component Trees and Their Persistent Homology." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13220–29. http://dx.doi.org/10.1609/aaai.v38i12.29222.

Abstract:
Low dimensional models like PCA are often used to simplify complex datasets by learning a single approximating subspace. This paradigm has expanded to union of subspaces models, like those learned by subspace clustering. In this paper, we present Principal Component Trees (PCTs), a graph structure that generalizes these ideas to identify mixtures of components that together describe the subspace structure of high-dimensional datasets. Each node in a PCT corresponds to a principal component of the data, and the edges between nodes indicate the components that must be mixed to produce a subspace that approximates a portion of the data. In order to construct PCTs, we propose two angle-distribution hypothesis tests to detect subspace clusters in the data. To analyze, compare, and select the best PCT model, we define two persistent homology measures that describe their shape. We show our construction yields two key properties of PCTs, namely ancestral orthogonality and non-decreasing singular values. Our main theoretical results show that learning PCTs reduces to PCA under multivariate normality, and that PCTs are efficient parameterizations of intersecting union of subspaces. Finally, we use PCTs to analyze neural network latent space, word embeddings, and reference image datasets.
11

Ling, Junyao. "Score Prediction of Sports Events Based on Parallel Self-Organizing Nonlinear Neural Network." Computational Intelligence and Neuroscience 2022 (January 15, 2022): 1–10. http://dx.doi.org/10.1155/2022/4882309.

Abstract:
This paper introduces the basic concepts and main characteristics of parallel self-organizing networks and analyzes and predicts parallel self-organizing networks through neural networks and their hybrid models. First, we train and describe the law and development trend of the parallel self-organizing network through its historical data and then use the discovered law to predict the performance of new data and compare it with the true value. Second, this paper takes the prediction and application of chaotic parallel self-organizing networks as the main research line and neural networks as the main research method. Based on a summary and analysis of traditional neural networks, it jumps out of inertial thinking and first proposes that phase space reconstruction parameters and neural network structure parameters be unified and optimized; it then proposes the idea of dividing the phase space into multiple subspaces. A multi-neural-network method is adopted to track and predict the local trajectory of the chaotic attractor in each subspace with high precision, improving overall forecasting performance. During the experiments, short-term and longer-term prediction experiments were performed on the chaotic parallel self-organizing network. The results show that not only is the accuracy of the simulation results greatly improved, but the prediction performance on real observed data is also greatly improved. When predicting the parallel self-organizing network, the minimum error of the self-organizing difference model is 0.3691, the minimum error of the self-organizing autoregressive neural network is 0.008, and the neural network minimum error is 0.0081. In the parallel self-organizing network prediction of sports event scores, the errors of the above models are 0.0174, 0.0081, 0.0135, and 0.0381, respectively.
12

Pehlevan, Cengiz, Tao Hu, and Dmitri B. Chklovskii. "A Hebbian/Anti-Hebbian Neural Network for Linear Subspace Learning: A Derivation from Multidimensional Scaling of Streaming Data." Neural Computation 27, no. 7 (July 2015): 1461–95. http://dx.doi.org/10.1162/neco_a_00745.

Abstract:
Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis, by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state, which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step toward an algorithmic theory of neural computation.
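
A compact numpy sketch of a network in this family: Hebbian updates for the feedforward weights, anti-Hebbian updates for the lateral weights, with outputs computed at the fixed point of the lateral dynamics. Dimensions, step size, and input statistics are illustrative assumptions, and the exact update schedule in the paper differs.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k, eta, steps = 8, 2, 0.005, 40000
scales = np.array([4.0, 3.0] + [0.2] * (d - 2))   # known principal subspace

W = 0.1 * rng.normal(size=(k, d))   # feedforward weights (Hebbian)
M = np.zeros((k, k))                # lateral weights (anti-Hebbian), zero diagonal

for _ in range(steps):
    x = scales * rng.normal(size=d)
    y = np.linalg.solve(np.eye(k) + M, W @ x)     # fixed point of y = Wx - My
    W += eta * (np.outer(y, x) - W)               # Hebbian update
    dM = eta * (np.outer(y, y) - M)
    np.fill_diagonal(dM, 0.0)                     # keep lateral diagonal at zero
    M += dM                                       # anti-Hebbian update

F = np.linalg.solve(np.eye(k) + M, W)             # effective input-output map
P_est = F.T @ np.linalg.pinv(F @ F.T) @ F
P_true = np.zeros((d, d)); P_true[:2, :2] = np.eye(2)
print("projector error:", np.linalg.norm(P_est - P_true))
```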
13

Tituaña, Luis, and Yunjun Xu. "Subspace Structured Neural Network for Rapid Trajectory Optimization." IFAC-PapersOnLine 56, no. 3 (2023): 37–42. http://dx.doi.org/10.1016/j.ifacol.2023.11.007.

14

Chuan, Zhi, Zhou Shi-Sheng, and Shi Yi. "The Research on Color Space Transfer Model Based on Dynamic Subspace Divided BP Neural Network." International Journal of Engineering and Technology 2, no. 5 (2010): 447–52. http://dx.doi.org/10.7763/ijet.2010.v2.163.

15

Kohonen, T. "The Self-Organising Map, a Possible Model of Brain Maps." Perception 26, no. 1_suppl (August 1997): 204. http://dx.doi.org/10.1068/v970002.

Abstract:
We stipulate that the following three categories of dynamic phenomena must be present in a realistic neural-network model: (i) activation; (ii) adaptation; (iii) plasticity control. In most neural models only activation and adaptation are present. The self-organising map (SOM) algorithm is the only neural-network model that includes all three phenomena. Its modelling laws include the following partial functions: (1) Some parallel computing mechanism for the specification of a cell in a piece of cell mass whose parametric representation matches or responds best to the afferent input. This cell is called the ‘winner’. (2) Control of some learning factor in the cells in the neighbourhood of the ‘winner’ so that only this neighbourhood is adapted to the current input. By virtue of the ‘neighbourhood learning’, the SOM forms spatially ordered maps of sensory experiences, which resemble the maps observed in the brain. The newest version of the SOM is the ASSOM (adaptive-subspace SOM). The adaptive processing units of the ASSOM are able to represent signal subspaces, not just templates of the original patterns. A signal subspace is an invariance group; therefore the processing units of the ASSOM are able to respond invariantly, e.g., to moving and transforming patterns, in a similar fashion to the complex cells in the cortex.
16

Xu, Lei, Adam Krzyzak, and Erkki Oja. "NEURAL NETS FOR DUAL SUBSPACE PATTERN RECOGNITION METHOD." International Journal of Neural Systems 02, no. 03 (January 1991): 169–84. http://dx.doi.org/10.1142/s0129065791000169.

Abstract:
A new modification of the subspace pattern recognition method, called the dual subspace pattern recognition (DSPR) method, is proposed, and neural network models combining both constrained Hebbian and anti-Hebbian learning rules are developed for implementing the DSPR method. An experimental comparison is made by using our model and a three-layer forward net with backpropagation learning. The results illustrate that our model can outperform the backpropagation model in suitable applications.
17

Tran, Tich Phuoc, Thi Thanh Sang Nguyen, Poshiang Tsai, and Xiaoying Kong. "BSPNN: boosted subspace probabilistic neural network for email security." Artificial Intelligence Review 35, no. 4 (January 1, 2011): 369–82. http://dx.doi.org/10.1007/s10462-010-9198-2.

18

LIU, ZHI-QIANG. "ADAPTIVE SUBSPACE SELF-ORGANIZING MAP AND ITS APPLICATIONS IN FACE RECOGNITION." International Journal of Image and Graphics 02, no. 04 (October 2002): 519–40. http://dx.doi.org/10.1142/s0219467802000834.

Abstract:
Recently Kohonen proposed the Adaptive Subspace Self-Organizing Map (ASSOM) for extracting subspace detectors from the input data. In the ASSOM, all subspaces represented by the neurons are constrained to intersect the origin in the feature space. As a result, it cannot compensate for the mean present in the data set. In this paper we propose affined subspaces for constructing a set of linear manifolds. This gives rise to a modified ASSOM known as the Adaptive Manifold Self-Organizing Map (AMSOM). In some cases, AMSOM performs many orders of magnitude better than ASSOM. We apply AMSOM to face recognition. Since some face images may share a manifold due to similarities present in the images, we use a multi-layer neural network to divide the manifold into sub-areas each of which corresponds to a single class, e.g., a face class for Smith. Our experiment results show that this approach performs better than those obtained using the standard Principal Component Analysis (PCA) method.
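
The modification described here, affine subspaces that compensate for the data mean, amounts to measuring distances to a linear manifold rather than to a subspace through the origin; a minimal sketch with an assumed orthonormal basis:

```python
import numpy as np

def dist_to_affine_manifold(x, mu, B):
    """Distance from x to the linear manifold {mu + B t}; subtracting the
    offset mu is what origin-constrained ASSOM subspaces cannot do."""
    r = x - mu
    return np.linalg.norm(r - B @ (B.T @ r))      # B: orthonormal columns

rng = np.random.default_rng(4)
B, _ = np.linalg.qr(rng.normal(size=(6, 2)))
mu = rng.normal(size=6)
x = mu + B @ rng.normal(size=2)                   # a point on the manifold
print(dist_to_affine_manifold(x, mu, B))          # ~0
print(dist_to_affine_manifold(x, np.zeros(6), B)) # origin-constrained: larger
```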
19

Wang, Pin, Shanshan Lv, Yongming Li, Qi Song, Linyu Li, Jiaxin Wang, and Hehua Zhang. "Hybrid Deep Transfer Network and Rotational Sample Subspace Ensemble Learning for Early Cancer Detection." Journal of Medical Imaging and Health Informatics 10, no. 10 (October 1, 2020): 2289–96. http://dx.doi.org/10.1166/jmihi.2020.3172.

Abstract:
Accurate histopathology cell image classification plays an important role in early cancer detection and diagnosis. Currently, convolutional neural networks are used to assist pathologists in histopathology image classification. In this paper, a Min mouse model was applied to evaluate the capability of convolutional neural network features for detecting early-stage carcinogenesis. However, the limited number of histopathology images from the mouse model may cause overfitting in classification. Hence, hybrid deep transfer network and rotational sample subspace ensemble learning is proposed for histopathology image classification. First, deep features are obtained by a deep transfer network based on regularized loss functions. Then, rotational sample subspace sampling is applied to increase the diversity between training sets. Subsequently, subspace projection learning is introduced to achieve dimensionality reduction. Finally, ensemble learning is used for histopathology image classification. The proposed method was tested on 126 histopathology images of the mouse model. The experimental results demonstrate that the proposed method achieved remarkable classification accuracy (99.39%, 99.74%, 100%), demonstrating that the proposed approach is promising for early cancer diagnosis.
20

Li, Tai-fu, Wei Jia, Wei Zhou, Ji-ke Ge, Yu-cheng Liu, and Li-zhong Yao. "Incomplete Phase Space Reconstruction Method Based on Subspace Adaptive Evolution Approximation." Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/983051.

Abstract:
A chaotic time series can be expanded to a multidimensional space by phase space reconstruction in order to recover the dynamic characteristics of the original system. It is difficult to obtain a complete phase space for a chaotic time series as a result of the inconsistency of phase space reconstruction. This paper presents the idea of subspace approximation: chaotic time series prediction based on phase space reconstruction can be considered a subspace approximation problem in different neighborhoods at different times. A common static neural network approximation is suitable for a trained neighborhood, but it cannot ensure its generalization performance in other, untrained neighborhoods. Subspace approximation with a neural network based on nonlinear extended Kalman filtering (EKF) is a dynamic evolution approximation from one neighborhood to another. Therefore, in view of the incomplete phase space arising from chaotic phase space reconstruction, we put forward a subspace adaptive evolution approximation method based on nonlinear Kalman filtering. The method is verified by multiple sets of wind speed prediction experiments in Wulong city, and the results demonstrate that it possesses higher chaotic prediction accuracy.
21

WU, JING, HONG YAN, and ANDREW CHALMERS. "HANDWRITTEN DIGIT RECOGNITION USING TWO-LAYER SELF-ORGANIZING MAPS." International Journal of Neural Systems 05, no. 04 (December 1994): 357–62. http://dx.doi.org/10.1142/s0129065794000347.

Abstract:
In this paper, we present a two-layer self-organizing neural network based method for handwritten digit recognition. The network consists of a base-layer self-organizing map and a set of corresponding maps in the second layer. The input patterns are partitioned into subspaces in the first layer. Patterns in a subspace are led to the second layer, and a corresponding map is built according to the first-layer performance. In the classification process, each pattern searches for several closest nodes in the base map and is then classified into a specified class by determining the nearest model among the corresponding maps in the second layer. The new method yielded higher accuracy and faster performance than the ordinary self-organizing neural network.
22

Li, Jiamu, Ji Zhang, Mohamed Jaward Bah, Jian Wang, Youwen Zhu, Gaoming Yang, Lingling Li, and Kexin Zhang. "An Auto-Encoder with Genetic Algorithm for High Dimensional Data: Towards Accurate and Interpretable Outlier Detection." Algorithms 15, no. 11 (November 15, 2022): 429. http://dx.doi.org/10.3390/a15110429.

Abstract:
When dealing with high-dimensional data, such as in biometric, e-commerce, or industrial applications, it is extremely hard to capture the abnormalities in full space due to the curse of dimensionality. Furthermore, it is becoming increasingly complicated but essential to provide interpretations for outlier detection results in high-dimensional space as a consequence of the large number of features. To alleviate these issues, we propose a new model based on a Variational AutoEncoder and Genetic Algorithm (VAEGA) for detecting outliers in subspaces of high-dimensional data. The proposed model employs a neural network to create a probabilistic dimensionality reduction variational autoencoder (VAE) that applies its low-dimensional hidden space to characterize the high-dimensional inputs. Then, the hidden vector is sampled randomly from the hidden space to reconstruct the data so that it closely matches the input data. The reconstruction error is then computed to determine an outlier score, and samples exceeding the threshold are tentatively identified as outliers. In the second step, a genetic algorithm (GA) is used as a basis for examining and analyzing the abnormal subspace of the outlier set obtained by the VAE layer. After encoding the outlier dataset’s subspaces, the degree of anomaly for the detected subspaces is calculated using the redefined fitness function. Finally, the abnormal subspace is calculated for the detected point by selecting the subspace with the highest degree of anomaly. The clustering of abnormal subspaces helps filter outliers that are mislabeled (false positives), and the VAE layer adjusts the network weights based on the false positives. When compared to other methods using five public datasets, the VAEGA outlier detection model results are highly interpretable and outperform or have competitive performance compared to current contemporary methods.
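
The first stage's scoring logic, reconstructing each sample from a low-dimensional hidden space and flagging large reconstruction errors, can be sketched as below, with a linear PCA autoencoder standing in for the VAE and a three-sigma threshold as an assumed rule:

```python
import numpy as np

rng = np.random.default_rng(5)
# Inliers near a 5-D subspace of R^20, plus a few off-subspace outliers.
X = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 20))
X += 0.1 * rng.normal(size=X.shape)
X[:5] = 3.0 * rng.normal(size=(5, 20))         # planted outliers

mu = X.mean(0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
V = Vt[:5].T                                   # 5-D "hidden space"
X_hat = mu + (X - mu) @ V @ V.T                # reconstruction
scores = np.linalg.norm(X - X_hat, axis=1)     # outlier score per sample
flagged = np.where(scores > scores.mean() + 3 * scores.std())[0]
print(flagged)                                 # should include indices 0..4
```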
23

Zhang, Long, Nana Wang, Jieli Wei, and Zhuyin Ren. "Exploring active subspace for neural network prediction of oscillating combustion." Combustion Theory and Modelling 25, no. 3 (April 16, 2021): 570–87. http://dx.doi.org/10.1080/13647830.2021.1915500.

24

Prakash, M., and M. N. Murty. "Growing subspace pattern recognition methods and their neural-network models." IEEE Transactions on Neural Networks 8, no. 1 (January 1997): 161–68. http://dx.doi.org/10.1109/72.554201.

25

Wen Yao, Xiaoqian Chen, Yong Zhao, and M. van Tooren. "Concurrent Subspace Width Optimization Method for RBF Neural Network Modeling." IEEE Transactions on Neural Networks and Learning Systems 23, no. 2 (February 2012): 247–59. http://dx.doi.org/10.1109/tnnls.2011.2178560.

26

JANKOVIC, MARKO, and HIDEMITSU OGAWA. "TIME-ORIENTED HIERARCHICAL METHOD FOR COMPUTATION OF PRINCIPAL COMPONENTS USING SUBSPACE LEARNING ALGORITHM." International Journal of Neural Systems 14, no. 05 (October 2004): 313–23. http://dx.doi.org/10.1142/s0129065704002091.

Abstract:
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the most famous PSA learning algorithms — the Subspace Learning Algorithm (SLA). The modification of the algorithm is based on the Time-Oriented Hierarchical Method (TOHM). The method uses two distinct time scales. On a faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons compete for the fulfilment of their "own interests". On this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (or why) the time-oriented hierarchical method can be used to transform any of the existing neural network PSA methods into a PCA method.
27

Nong, Ji Fu. "A Principal Components Analysis Self-Organizing Neural Network Model and Computational Experiment." Advanced Materials Research 756-759 (September 2013): 3330–35. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.3330.

Abstract:
We propose a new self-organizing neural model that performs principal components analysis. It is also related to the adaptive subspace self-organizing map (ASSOM) network, but its training equations are simpler. Experimental results are reported, which show that the new model has better performance than the ASSOM network.
28

Liu, Hongxia. "Design of Neural Network Model for Cross-Media Audio and Video Score Recognition Based on Convolutional Neural Network Model." Computational Intelligence and Neuroscience 2022 (June 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/4626867.

Abstract:
In this paper, the residual convolutional neural network is used to extract note features from music score images, addressing the problem of model degradation; multiscale feature fusion is then used to fuse feature information from different levels of the same feature map to enhance the feature representation ability of the model. A network composed of a bidirectional simple recurrent unit and a chained temporal classification function is used to recognize notes, parallelizing a large number of calculations and thereby speeding up training convergence, which also frees the dataset from the need for strict label alignment and reduces the requirements on the dataset. Aiming at the problem that existing common-subspace cross-modal retrieval methods insufficiently mine local consistency within modalities, a cross-modal retrieval method fused with graph convolution is proposed. The K-nearest-neighbor algorithm is used to construct modal graphs for samples of different modalities, the original features of samples from different modalities are encoded through a symmetric graph convolutional coding network and a symmetric multilayer fully connected coding network, and the encoded features are fused. The intramodal semantic constraints and intermodal modality-invariant constraints are jointly optimized in the common subspace to learn highly locally consistent and semantically consistent common representations for samples from different modalities. The error values of the experimental results are used to illustrate the effect of parameters such as the number of iterations and the number of neurons on the network. In order to illustrate more accurately that the generated music sequence is very similar to the original music sequence, the generated music sequence is also framed and its spectrograms are produced. The accuracy of the experiment is illustrated by comparing the spectrograms of the generated and original sequences, and genre classification predictions are also performed on the generated music to show that the network can generate music of different genres.
29

Wen, Hui, Tongbin Li, Deli Chen, Jianlu Yang, and Yan Che. "An Optimized Neural Network Classification Method Based on Kernel Holistic Learning and Division." Mathematical Problems in Engineering 2021 (February 26, 2021): 1–16. http://dx.doi.org/10.1155/2021/8857818.

Abstract:
An optimized neural network classification method based on kernel holistic learning and division (KHLD) is presented. The proposed method is based on the learned radial basis function (RBF) kernel as the research object. The kernel proposed here can be considered a subspace region consisting of the same pattern category in the training sample space. By extending the region of the sample space of the original instances, relevant information between instances can be obtained from the subspace, and the classifier’s boundary can be far from the original instances; thus, the robustness and generalization performance of the classifier are enhanced. In concrete implementation, a new pattern vector is generated within each RBF kernel according to the instance optimization and screening method to characterize KHLD. Experiments on artificial datasets and several UCI benchmark datasets show the effectiveness of our method.
30

MADHYASTHA, PRANAVA, JOSIAH WANG, and LUCIA SPECIA. "The role of image representations in vision to language tasks." Natural Language Engineering 24, no. 3 (March 21, 2018): 415–39. http://dx.doi.org/10.1017/s1351324918000116.

Abstract:
Tasks that require modeling of both language and visual information, such as image captioning, have become very popular in recent years. Most state-of-the-art approaches make use of image representations obtained from a deep neural network, which are used to generate language information in a variety of ways with end-to-end neural-network-based models. However, it is not clear how different image representations contribute to language generation tasks. In this paper, we probe the representational contribution of the image features in an end-to-end neural modeling framework and study the properties of different types of image representations. We focus on two popular vision to language problems: The task of image captioning and the task of multimodal machine translation. Our analysis provides interesting insights into the representational properties and suggests that end-to-end approaches implicitly learn a visual-semantic subspace and exploit the subspace to generate captions.
31

Rosso, Marco Martino, Angelo Aloisio, Giansalvo Cirrincione, and Giuseppe Carlo Marano. "Subspace features and statistical indicators for neural network-based damage detection." Structures 56 (October 2023): 104792. http://dx.doi.org/10.1016/j.istruc.2023.06.123.

32

Ringach, D. L., M. Carandini, G. Sapiro, and R. Shapley. "Cortical Circuitry Revealed by Reverse Correlation in the Orientation Domain." Perception 25, no. 1_suppl (August 1996): 130. http://dx.doi.org/10.1068/v96l0711.

Abstract:
We applied a novel ‘white-noise’-like stimulation technique to study the neural circuitry underlying the orientation tuning of simple cortical cells in cats and monkeys. We generate an image sequence (the stimulus) by selecting, at each refresh time, a random image from a finite set s of orthonormal images. For simple cells, we have shown that the above stimulus allows one to compute the projection of the receptive field onto the subspace spanned by the vectors in s (Ringach et al, 1996 ARVO Proceedings in press). The calculation is based on the cross-correlation between the input image sequence and the cell's spike train output. In the present study, we selected s to be the subspace spanned by sine-wave gratings having a fixed spatial frequency but different orientations and spatial phases. A finite orthonormal basis for this subspace is a subset of the complete two-dimensional discrete Hartley basis functions. The choice of this ‘orientation-subspace’ allowed us to measure how the orientation tuning of the cells evolved in time at a particular spatial frequency. In addition to a sharp peak of activity at the optimal orientation of the cell, we observed secondary peaks of the orientation tuning curve at off-optimal orientations. Off-peak inhibition was also observed frequently. These results are difficult to reconcile with feedforward models of the neural network producing orientation tuning, but are consistent with recurrent cortical network models (Carandini and Ringach, 1996, paper presented at the Computation and Neural Systems Conference).
33

Zhao, Baigan, Yingping Huang, Hongjian Wei, and Xing Hu. "Ego-Motion Estimation Using Recurrent Convolutional Neural Networks through Optical Flow Learning." Electronics 10, no. 3 (January 20, 2021): 222. http://dx.doi.org/10.3390/electronics10030222.

Abstract:
Visual odometry (VO) refers to incremental estimation of the motion state of an agent (e.g., vehicle and robot) by using image information, and is a key component of modern localization and navigation systems. Addressing the monocular VO problem, this paper presents a novel end-to-end network for estimation of camera ego-motion. The network learns the latent subspace of optical flow (OF) and models sequential dynamics so that the motion estimation is constrained by the relations between sequential images. We compute the OF field of consecutive images and extract the latent OF representation in a self-encoding manner. A Recurrent Neural Network is then followed to examine the OF changes, i.e., to conduct sequential learning. The extracted sequential OF subspace is used to compute the regression of the 6-dimensional pose vector. We derive three models with different network structures and different training schemes: LS-CNN-VO, LS-AE-VO, and LS-RCNN-VO. Particularly, we separately train the encoder in an unsupervised manner. By this means, we avoid non-convergence during the training of the whole network and allow more generalized and effective feature representation. Substantial experiments have been conducted on KITTI and Malaga datasets, and the results demonstrate that our LS-RCNN-VO outperforms the existing learning-based VO approaches.
34

Aparicio, Miguel, Tetyana Baydyk, Ernst Kussul, Graciela Velasco, and Carlos Vera. "Recognition of Bean Plants in Weeds Using Neural Networks." WSEAS TRANSACTIONS ON CIRCUITS AND SYSTEMS 21 (March 1, 2022): 34–39. http://dx.doi.org/10.37394/23201.2022.21.4.

Abstract:
The implementation of a random subspace classifier (RSC) neural network for the recognition of bean plants among weeds was proposed. The RSC neural classifier is based on the multilayer perceptron with a single layer of training connections, allowing high-speed training. The input to this classifier can be considered in various modes, for example, histograms of brightness, contrast, and orientation of micro-contours. The RSC neural classifier has been developed for recognition and has been applied to different tasks such as micromechanics, tissue recognition, and recognition of metallic textures. The RSC application can help automate industrial processes in agriculture. For this purpose, computer vision based on neural networks can be used.
35

Zhi, Chuan, Zhi Jian Li, and Yi Shi. "Research on Robustness of Color Device Characteristic Methods Based on Artificial Intelligence." Applied Mechanics and Materials 262 (December 2012): 65–68. http://dx.doi.org/10.4028/www.scientific.net/amm.262.65.

Abstract:
The nature of device color characteristic methods is the mutual conversion between a device-dependent color space and a device-independent color space. By defining the robustness of a color space conversion model and an evaluation method, this paper presents a comparative study of the robustness of several color space conversion methods based on fuzzy control, the dynamic subspace divided BP neural network identification method, and the fuzzy-neural identification method. The results show that device color characteristic methods based on the fuzzy-neural identification method combine the features of the BP neural network with fuzzy control and thereby greatly improve the robustness of the model.
36

Ma, Zhiheng, Dezheng Gao, Shaolei Yang, Xing Wei, and Yihong Gong. "Dataset Condensation via Expert Subspace Projection." Sensors 23, no. 19 (September 28, 2023): 8148. http://dx.doi.org/10.3390/s23198148.

Abstract:
The rapid growth in dataset sizes in modern deep learning has significantly increased data storage costs. Furthermore, the training and time costs for deep neural networks are generally proportional to the dataset size. Therefore, reducing the dataset size while maintaining model performance is an urgent research problem that needs to be addressed. Dataset condensation is a technique that aims to distill the original dataset into a much smaller synthetic dataset while maintaining downstream training performance on any agnostic neural network. Previous work has demonstrated that matching the training trajectory between the synthetic dataset and the original dataset is more effective than matching the instantaneous gradient, as it incorporates long-range information. Despite the effectiveness of trajectory matching, it suffers from complex gradient unrolling across iterations, which leads to significant memory and computation overhead. To address this issue, this paper proposes a novel approach called Expert Subspace Projection (ESP), which leverages long-range information while avoiding gradient unrolling. Instead of strictly enforcing the synthetic dataset’s training trajectory to mimic that of the real dataset, ESP only constrains it to lie within the subspace spanned by the training trajectory of the real dataset. The memory-saving advantage offered by our method facilitates unbiased training on the complete set of synthetic images and seamless integration with other dataset condensation techniques. Through extensive experiments, we have demonstrated the effectiveness of our approach. Our method outperforms the trajectory matching method on CIFAR10 by 16.7% in the setting of 1 Image/Class, surpassing the previous state-of-the-art method by 3.2%.
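
The central constraint is easy to state in code: rather than matching the expert (real-data) trajectory point by point, project the synthetic-data update onto the subspace that the expert trajectory spans. A hedged sketch, with the trajectory represented as an assumed matrix of parameter-difference vectors:

```python
import numpy as np

def project_onto_expert_subspace(delta, expert_steps):
    """Project a parameter update `delta` (shape (p,)) onto the subspace
    spanned by expert trajectory steps (rows of expert_steps, shape (t, p))."""
    B, _ = np.linalg.qr(expert_steps.T)     # orthonormal basis, shape (p, t)
    return B @ (B.T @ delta)

rng = np.random.default_rng(6)
expert_steps = rng.normal(size=(4, 50))     # 4 recorded steps in a 50-D model
delta = rng.normal(size=50)
proj = project_onto_expert_subspace(delta, expert_steps)
print(np.linalg.norm(delta), np.linalg.norm(proj))   # projection is shorter
```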
37

Li, Changsheng, Chen Yang, Bo Liu, Ye Yuan, and Guoren Wang. "LRSC: Learning Representations for Subspace Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8340–48. http://dx.doi.org/10.1609/aaai.v35i9.17014.

Abstract:
Deep learning based subspace clustering methods have attracted increasing attention in recent years, where a basic theme is to non-linearly map data into a latent space, and then uncover subspace structures based upon the data self-expressiveness property. However, almost all existing deep subspace clustering methods only rely on target domain data, and always resort to shallow neural networks for modeling data, leaving huge room to design more effective representation learning mechanisms tailored for subspace clustering. In this paper, we propose a novel subspace clustering framework through learning precise sample representations. In contrast to previous approaches, the proposed method aims to leverage external data through constructing lots of relevant tasks to guide the training of the encoder, motivated by the idea of meta-learning. Considering limited layer structures of current deep subspace clustering models, we intend to distill knowledge from a deeper network trained on the external data, and transfer it into the shallower model. To reach the above two goals, we propose a new loss function to realize them in a joint framework. Moreover, we propose to construct a new pretext task for self-supervised training of the model, such that the representation ability of the model can be further improved. Extensive experiments are performed on four publicly available datasets, and experimental results clearly demonstrate the efficacy of our method, compared to state-of-the-art methods.
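
The self-expressiveness property at the heart of such methods writes each sample as a combination of the others, Z ≈ CZ; a ridge-regularized sketch of computing C is shown below (the deep models learn the representation and C jointly, and zeroing the diagonal afterwards is a simplification):

```python
import numpy as np

def self_expressive_coeffs(Z, lam=0.1):
    """Least-squares self-expression Z ~ C Z with a ridge penalty on C;
    |C| + |C.T| is then commonly used as an affinity for spectral clustering."""
    n = Z.shape[0]
    G = Z @ Z.T
    C = np.linalg.solve(G + lam * np.eye(n), G)   # (G + lam I)^-1 G
    np.fill_diagonal(C, 0.0)                      # discourage trivial self-loops
    return C
```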
38

Kohonen, Teuvo, Samuel Kaski, and Harri Lappalainen. "Self-Organized Formation of Various Invariant-Feature Filters in the Adaptive-Subspace SOM." Neural Computation 9, no. 6 (August 1, 1997): 1321–44. http://dx.doi.org/10.1162/neco.1997.9.6.1321.

Abstract:
The adaptive-subspace self-organizing map (ASSOM) is a modular neural network architecture, the modules of which learn to identify input patterns subject to some simple transformations. The learning process is unsupervised, competitive, and related to that of the traditional SOM (self-organizing map). Each neural module becomes adaptively specific to some restricted class of transformations, and modules close to each other in the network become tuned to similar features in an orderly fashion. If different transformations exist in the input signals, different subsets of ASSOM units become tuned to these transformation classes.
39

Lipshutz, David, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, and Dmitri B. Chklovskii. "A Biologically Plausible Neural Network for Multichannel Canonical Correlation Analysis." Neural Computation 33, no. 9 (August 19, 2021): 2309–52. http://dx.doi.org/10.1162/neco_a_01414.

Abstract:
Cortical pyramidal neurons receive inputs from multiple distinct neural populations and integrate these inputs in separate dendritic compartments. We explore the possibility that cortical microcircuits implement canonical correlation analysis (CCA), an unsupervised learning method that projects the inputs onto a common subspace so as to maximize the correlations between the projections. To this end, we seek a multichannel CCA algorithm that can be implemented in a biologically plausible neural network. For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local. Starting from a novel CCA objective function, we derive an online optimization algorithm whose optimization steps can be implemented in a single-layer neural network with multicompartmental neurons and local non-Hebbian learning rules. We also derive an extension of our online CCA algorithm with adaptive output rank and output whitening. Interestingly, the extension maps onto a neural network whose neural architecture and synaptic updates resemble neural circuitry and non-Hebbian plasticity observed in the cortex.
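
For reference, the objective such a network optimizes can be computed offline with standard linear algebra; the sketch below returns the top-k canonical correlations of two views (the paper's contribution, an online network with local non-Hebbian rules, is not attempted here):

```python
import numpy as np

def cca_correlations(X, Y, k):
    """Top-k canonical correlations between views X (n x dx) and Y (n x dy)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Cxx, Cyy, Cxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
    def inv_sqrt(C):                     # C^(-1/2) for symmetric positive C
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    s = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy), compute_uv=False)
    return s[:k]

rng = np.random.default_rng(7)
z = rng.normal(size=(500, 2))                        # shared latent signal
X = z @ rng.normal(size=(2, 5)) + 0.5 * rng.normal(size=(500, 5))
Y = z @ rng.normal(size=(2, 4)) + 0.5 * rng.normal(size=(500, 4))
print(cca_correlations(X, Y, 2))                     # two high correlations
```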
40

Ahmadian, Kushan, and Marina Gavrilova. "Chaotic Neural Network for Biometric Pattern Recognition." Advances in Artificial Intelligence 2012 (August 30, 2012): 1–9. http://dx.doi.org/10.1155/2012/124176.

Abstract:
Biometric pattern recognition emerged as one of the predominant research directions in modern security systems. It plays a crucial role in the authentication of both real-world and virtual-reality entities, allowing a system to make an informed decision on granting access privileges or providing specialized services. The major issues tackled by researchers arise from the ever-growing demands on the precision and performance of security systems and the simultaneously increasing complexity of the data and/or behavioral patterns to be recognized. In this paper, we propose to deal with both issues by introducing a new approach to biometric pattern recognition based on a chaotic neural network (CNN). The proposed method allows learning complex data patterns easily while concentrating on the features most important for correct authentication, and it employs a unique method to train a different classifier on each feature set. The aggregation result depicts the final decision on the recognized identity. In order to train an accurate set of classifiers, a subspace clustering method has been used to overcome the problem of the high dimensionality of the feature space. The experimental results show the superior performance of the proposed method.
41

Jeong, Sang-Su, Won-Kwang Park, and Young-Deuk Joh. "Construction of Full-View Data from Limited-View Data Using Artificial Neural Network in the Inverse Scattering Problem." Applied Sciences 12, no. 19 (September 29, 2022): 9801. http://dx.doi.org/10.3390/app12199801.

Abstract:
Generally, the results of imaging limited-view data in the inverse scattering problem are relatively poor compared with those of imaging full-view data, and it is known that solving this problem mathematically is very difficult. Therefore, the main purpose of this study is to solve the inverse scattering problem in the limited-view situation for some cases by using artificial intelligence. We attempted to develop an artificial intelligence suitable for this problem for the cases where the number of scatterers was 2 and 3, respectively, based on CNN (Convolutional Neural Network) and ANN (Artificial Neural Network) models. As a result, when the ReLU function was used as the activation function and the ANN consisted of four hidden layers, a learning model with a small mean square error between the output data and the ground truth data could be developed. In order to verify the performance and check for overfitting of the developed learning model, limited-view data that were not used for learning were newly created. The mean square error between the output data obtained from these and the ground truth data was also small, and the distributions of the two data sets were similar. In addition, the locations of scatterers could be accurately found by imaging the output data with the subspace migration algorithm. To support this, data related to the artificial neural network training and imaging results using the subspace migration algorithm are attached.
42

Miao, Y., and Y. Hua. "Fast subspace tracking and neural network learning by a novel information criterion." IEEE Transactions on Signal Processing 46, no. 7 (July 1998): 1967–79. http://dx.doi.org/10.1109/78.700968.

43

Yue, Han, Hangbin Wu, Ville Lehtola, Junyi Wei, and Chun Liu. "Indoor functional subspace division from point clouds based on graph neural network." International Journal of Applied Earth Observation and Geoinformation 127 (March 2024): 103656. http://dx.doi.org/10.1016/j.jag.2024.103656.

44

Zha, Yufei, Min Wu, Zhuling Qiu, Jingxian Sun, Peng Zhang, and Wei Huang. "Online Semantic Subspace Learning with Siamese Network for UAV Tracking." Remote Sensing 12, no. 2 (January 19, 2020): 325. http://dx.doi.org/10.3390/rs12020325.

Abstract:
In urban environment monitoring, visual tracking on unmanned aerial vehicles (UAVs) can enable more applications owing to its inherent advantages, but it also brings new challenges for existing visual tracking approaches (such as complex background clutter, rotation, fast motion, small objects, and real-time requirements due to camera motion and viewpoint changes). Based on the Siamese network, tracking can be conducted efficiently in recent UAV datasets. Unfortunately, the learned convolutional neural network (CNN) features are not discriminative enough to identify the target from the background or clutter, in particular for distractors, and cannot capture appearance variations over time. Additionally, occlusion and disappearance are also causes of tracking failure. In this paper, a semantic subspace module is designed to be integrated into a Siamese network tracker to encode the local fine-grained details of the target for UAV tracking. More specifically, the target’s semantic subspace is learned online to adapt to the target in the temporal domain. Additionally, the pixel-wise response of the semantic subspace can be used to detect occlusion and disappearance of the target, which enables reasonable updating to relieve model drift. Substantial experiments conducted on challenging UAV benchmarks illustrate that the proposed method obtains competitive results in both accuracy and efficiency when applied to UAV videos.
46

Farabbi, Andrea, and Luca Mainardi. "Domain-Specific Processing Stage for Estimating Single-Trail Evoked Potential Improves CNN Performance in Detecting Error Potential." Sensors 23, no. 22 (November 8, 2023): 9049. http://dx.doi.org/10.3390/s23229049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We present a novel architecture designed to enhance the detection of Error Potential (ErrP) signals during ErrP stimulation tasks. In the context of predicting ErrP presence, conventional Convolutional Neural Networks (CNNs) typically accept a raw EEG signal as input, encompassing both the information associated with the evoked potential and the background activity, which can potentially diminish predictive accuracy. Our approach involves advanced Single-Trial (ST) ErrP enhancement techniques for processing raw EEG signals in the initial stage, followed by CNNs for discerning between ErrP and NonErrP segments in the second stage. We tested different combinations of methods and CNNs. As far as ST ErrP estimation is concerned, we examined various methods encompassing subspace regularization techniques, Continuous Wavelet Transform, and ARX models. For the classification stage, we evaluated the performance of EEGNet, CNN, and a Siamese Neural Network. A comparative analysis against the method of directly applying CNNs to raw EEG signals revealed the advantages of our architecture. Leveraging subspace regularization yielded the best improvement in classification metrics, at up to 14% in balanced accuracy and 13.4% in F1-score.
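As a rough sketch of one subspace-regularization variant for single-trial estimation (a simple PCA projection is assumed here purely for illustration; the paper evaluates several estimation methods, including Continuous Wavelet Transform and ARX models):

```python
import numpy as np

def single_trial_estimate(trials, k=3):
    """Project each raw EEG trial onto the subspace spanned by the
    top-k principal components of the trial ensemble, attenuating
    background activity before the CNN classification stage.
    trials: (n_trials, n_samples) array of epoched EEG."""
    mean = trials.mean(axis=0)
    U, s, Vt = np.linalg.svd(trials - mean, full_matrices=False)
    basis = Vt[:k]                      # dominant temporal components
    coeffs = (trials - mean) @ basis.T  # per-trial subspace coordinates
    return coeffs @ basis + mean        # regularized single-trial estimates
```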
47

Mao, Handong, Xiaodan Lin, Zhimao Li, Xiaobin Shen, and Wenzhao Zhao. "Anti-Icing System Performance Prediction Using POD and PSO-BP Neural Networks." Aerospace 11, no. 6 (May 26, 2024): 430. http://dx.doi.org/10.3390/aerospace11060430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The anti-icing system is important for ice protection and flight safety. Rapid prediction of the anti-icing system’s performance is critical to reducing the design time and increasing efficiency. The paper proposes a method to quickly predict the anti-icing performance of the hot air anti-icing system. The method is based on Proper Orthogonal Decomposition (POD) and Back Propagation (BP) neural networks improved with the Particle Swarm Optimization (PSO) algorithm to construct the PSO-BP neural network. POD is utilized for data compression and feature extraction for the skin temperature and runback water obtained by numerical calculation. A lower-dimensional approximation is derived from the projection subspace, which consists of a set of basis modes. The PSO-BP neural network establishes the mapping relationship between the flight condition parameters (including flight height, atmospheric temperature, flight speed, median volume diameter, and liquid water content) and the characteristic coefficients. The results show that the average absolute errors of prediction with the PSO-BP neural network model on skin temperature and runback water thickness are 3.87 K and 0.93 μm, respectively. The method can provide an effective tool for iteratively optimizing hot air anti-icing system design.
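The POD stage described here, compressing the field snapshots into a handful of characteristic coefficients that a regressor can then predict, can be sketched with a plain SVD; the snapshot shapes and the 99.9% energy criterion below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Snapshot matrix: each column is one numerical solution of, e.g., the
# skin temperature field for one flight condition (shapes assumed).
snapshots = np.random.rand(5000, 200)   # placeholder for CFD results

# POD basis via the SVD; the leading modes span the projection subspace.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = np.searchsorted(energy, 0.999) + 1  # retain 99.9% of the energy
basis = U[:, :k]                        # truncated POD basis

# Characteristic coefficients: low-dimensional targets for the network,
# which maps the five flight-condition parameters to these values.
coeffs = basis.T @ (snapshots - mean)   # shape (k, n_conditions)

def reconstruct(predicted_coeffs):
    """Recover the full-field prediction from predicted coefficients."""
    return basis @ predicted_coeffs + mean
```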
48

Kim, Jonghong, WonHee Lee, Sungdae Baek, Jeong-Ho Hong, and Minho Lee. "Incremental Learning for Online Data Using QR Factorization on Convolutional Neural Networks." Sensors 23, no. 19 (September 27, 2023): 8117. http://dx.doi.org/10.3390/s23198117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Catastrophic forgetting, the rapid loss of previously learned representations while learning new data, is one of the main problems of deep neural networks. In this paper, we propose a novel incremental learning framework that addresses the forgetting problem by learning newly arriving data in an online manner. The framework can learn additional data or new classes with less catastrophic forgetting. We adapt the hippocampal memory process to deep neural networks by defining the effective maximum of neural activation and its boundary to represent a feature distribution. In addition, we incorporate incremental QR factorization into the networks to learn new data carrying both existing and new labels with less forgetting. The QR factorization provides an accurate subspace prior, and its incremental update expresses how new data interact with both existing classes and new classes. In our framework, a set of appropriate features (i.e., nodes) provides an improved representation for each class. We apply our method to convolutional neural networks (CNNs) trained on the CIFAR-100 and CIFAR-10 datasets. The experimental results show that the proposed method efficiently alleviates the stability-plasticity dilemma in deep neural networks, preserving the performance of a trained network while effectively learning unseen data and additional new classes.
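The incremental QR step the abstract refers to can be illustrated by the standard column-append update, which extends an existing thin factorization without refactorizing from scratch; this generic sketch is not the authors' exact procedure:

```python
import numpy as np

def qr_append_column(Q, R, a):
    """Update a thin QR factorization A = Q R when a new feature vector
    `a` (e.g., the activation signature of new data) is appended as a
    column of A. Assumes `a` is not already in the span of Q."""
    r = Q.T @ a                  # components of `a` inside the current subspace
    q = a - Q @ r                # residual orthogonal to the learned subspace
    rho = np.linalg.norm(q)
    Q_new = np.column_stack([Q, q / rho])
    R_new = np.block([[R, r[:, None]],
                      [np.zeros((1, R.shape[1])), np.array([[rho]])]])
    return Q_new, R_new
```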
49

DA SILVA, IVAN NUNES, ANDRÉ NUNES DE SOUZA, and MÁRIO EDUARDO BORDON. "A NOVEL APPROACH FOR SOLVING CONSTRAINED NONLINEAR OPTIMIZATION PROBLEMS USING NEUROFUZZY SYSTEMS." International Journal of Neural Systems 11, no. 03 (June 2001): 281–86. http://dx.doi.org/10.1142/s0129065701000722.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A neural network model for solving constrained nonlinear optimization problems with bounded variables is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. The network is shown to be completely stable and globally convergent to the solutions of constrained nonlinear optimization problems. A fuzzy logic controller is incorporated in the network to minimize convergence time. Simulation results are presented to validate the proposed approach.
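The valid-subspace technique mentioned here is conventionally written as the affine projection v ← Tv + s, which confines the network state to the set satisfying the equality constraints; a minimal sketch (omitting the energy-descent dynamics of the Hopfield network and the fuzzy controller) might look like this:

```python
import numpy as np

def valid_subspace_step(v, A, b, lo, hi):
    """One projection step of a modified-Hopfield-style update: map the
    state onto the subspace of points satisfying A v = b, then clip to
    the variable bounds. T and s are the usual valid-subspace parameters."""
    A_pinv = np.linalg.pinv(A)
    T = np.eye(A.shape[1]) - A_pinv @ A   # projector onto null(A)
    s = A_pinv @ b                        # particular solution of A v = b
    v = T @ v + s                         # confine state to the valid subspace
    return np.clip(v, lo, hi)             # enforce the bounded variables
```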
50

Liu, Zhoufeng, Baorui Wang, Chunlei Li, Miao Yu, and Shumin Ding. "Fabric defect detection based on deep-feature and low-rank decomposition." Journal of Engineered Fibers and Fabrics 15 (January 2020): 155892502090302. http://dx.doi.org/10.1177/1558925020903026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Fabric defect detection plays an important role in controlling the quality of textile production. In this article, a novel fabric defect detection algorithm is proposed based on a multi-scale convolutional neural network and low-rank decomposition model. First, multi-scale convolutional neural network, which can extract the multi-scale deep feature of the image using multiple nonlinear transformations, is adopted to improve the characterization ability of fabric images with complex textures. The effective feature extraction makes the background lie in a low-rank subspace, and a sparse defect deviates from the low-rank subspace. Then, the low-rank decomposition model is constructed to decompose the feature matrix into the low-rank part (background) and the sparse part (salient defect). Finally, the saliency maps generated by the sparse matrix are segmented based on an improved optimal threshold to locate the fabric defect regions. Experimental results indicate that the feature extracted by the multi-scale convolutional neural network is more suitable for characterizing the fabric texture than the traditional hand-crafted feature extraction methods, such as histogram of oriented gradient, local binary pattern, and Gabor. The adopted low-rank decomposition model can effectively separate the defects from the background. Moreover, the proposed method is superior to state-of-the-art methods in terms of its adaptability and detection efficiency.
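The low-rank-plus-sparse decomposition at the heart of this method can be illustrated with a simplified alternating scheme based on singular value thresholding and soft shrinkage; this is an illustrative stand-in, not the exact optimization solved in the article:

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding operator used for the sparse (defect) part."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def low_rank_sparse_split(F, lam=None, n_iter=50, mu=1.0):
    """Split a deep-feature matrix F into a low-rank background part L
    and a sparse defect part S with a simple alternating scheme."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(F.shape))  # common default weight
    S = np.zeros_like(F)
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding of F - S.
        U, s, Vt = np.linalg.svd(F - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: soft-threshold the residual.
        S = shrink(F - L, lam / mu)
    return L, S   # saliency maps come from S, then thresholding
```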
