Scientific literature on the topic "Neural network subspace"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source type:

Consult thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Neural network subspace".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation for the selected source in your preferred style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is included in the metadata.

Journal articles on the topic "Neural network subspace"

1

Oja, Erkki. "Neural Networks, Principal Components, and Subspaces." International Journal of Neural Systems 1, no. 1 (January 1989): 61–68. http://dx.doi.org/10.1142/s0129065789000475.

Full text
Abstract:
A single neuron with Hebbian-type learning for the connection weights, and with nonlinear internal feedback, has been shown to extract the statistical principal components of its stationary input pattern sequence. A generalization of this model to a layer of neuron units is given, called the Subspace Network, which yields a multi-dimensional, principal component subspace. This can be used as an associative memory for the input vectors or as a module in nonsupervised learning of data clusters in the input space. It is also able to realize a powerful pattern classifier based on projections on class subspaces. Some classification results for natural textures are given.
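The learning rule the abstract describes is compact enough to sketch. Below is a minimal numpy illustration of Oja's subspace rule; the learning rate and synthetic data are our choices for the example, not details from the paper:

```python
import numpy as np

def subspace_network_step(W, x, lr=0.005):
    """One update of Oja's subspace rule: W <- W + lr * (x - W y) y^T,
    with y = W^T x. The columns of W converge to an orthonormal basis
    of the principal subspace of the input distribution."""
    y = W.T @ x                          # neuron outputs (coordinates in the subspace)
    W += lr * np.outer(x - W @ y, y)     # Hebbian term with internal feedback
    return W

rng = np.random.default_rng(0)
# Inputs whose variance is concentrated in the first two coordinates.
X = rng.normal(size=(5000, 4)) * np.array([3.0, 2.0, 0.3, 0.1])
W = rng.normal(size=(4, 2)) * 0.1        # 4 inputs, 2 subspace neurons
for x in X:
    W = subspace_network_step(W, x)
# W @ W.T approximates the projector onto the top-2 principal subspace.
print(np.round(W @ W.T, 2))
```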
2

Edraki, Marzieh, Nazanin Rahnavard, and Mubarak Shah. "SubSpace Capsule Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 7 (April 3, 2020): 10745–53. http://dx.doi.org/10.1609/aaai.v34i07.6703.

Full text
Abstract:
Convolutional neural networks (CNNs) have become a key asset in most fields of AI. Despite their successful performance, CNNs suffer from a major drawback: they fail to capture the hierarchy of spatial relations among different parts of an entity. As a remedy to this problem, the idea of capsules was proposed by Hinton. In this paper, we propose the SubSpace Capsule Network (SCN), which exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity through a group of capsule subspaces, instead of simply grouping neurons to create capsules. A capsule is created by projecting an input feature vector from a lower layer onto the capsule subspace using a learnable transformation. This transformation finds the degree of alignment of the input with the properties modeled by the capsule subspace. We show that SCN is a general capsule network that can successfully be applied to both discriminative and generative models without incurring computational overhead compared to a CNN at test time. The effectiveness of SCN is evaluated through a comprehensive set of experiments on supervised image classification, semi-supervised image classification, and high-resolution image generation using the generative adversarial network (GAN) framework. SCN significantly improves the performance of the baseline models on all three tasks.
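The capsule-creation step described above, projecting a lower-layer feature vector onto a learnable capsule subspace, can be sketched as follows; the dimensions and the explicit orthonormalization are our illustrative choices, not details from the paper:

```python
import numpy as np

def capsule_projection(x, W):
    """Project the feature vector x onto the capsule subspace spanned by the
    columns of the learnable matrix W (orthonormalized here for clarity).
    The projection's norm measures how well x aligns with the properties
    the capsule models."""
    Q, _ = np.linalg.qr(W)           # orthonormal basis of the capsule subspace
    v = Q @ (Q.T @ x)                # projection of x onto that subspace
    return v, np.linalg.norm(v)      # capsule vector and its activation

rng = np.random.default_rng(1)
x = rng.normal(size=128)             # feature vector from a lower layer
W = rng.normal(size=(128, 8))        # one capsule: an 8-dimensional subspace
v, activation = capsule_projection(x, W)
print(v.shape, round(activation, 3))
```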
3

Zhi, Chuan, Ling Hua Guo, Mei Yun Zhang, and Yi Shi. "Research on Dynamic Subspace Divided BP Neural Network Identification Method of Color Space Transform Model." Advanced Materials Research 174 (December 2010): 97–100. http://dx.doi.org/10.4028/www.scientific.net/amr.174.97.

Full text
Abstract:
To improve the precision of BP-neural-network color space conversion, this paper takes the RGB and CIE L*a*b* color spaces as an example. Based on the input values, the color space is dynamically divided into many subspaces. Applying a BP neural network within each subspace effectively avoids the local optima that a single BP network encounters over the whole color space and greatly improves the conversion precision.
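The divide-and-train idea can be illustrated in a few lines. In this sketch, scikit-learn's MLPRegressor stands in for the paper's BP network, and the uniform grid partition of the RGB cube, along with the placeholder target values, is our own simplification:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def subspace_index(rgb, bins=2):
    """Map an RGB input in [0, 1]^3 to one of bins**3 subcubes."""
    idx = np.minimum((np.asarray(rgb) * bins).astype(int), bins - 1)
    return idx[0] * bins * bins + idx[1] * bins + idx[2]

rng = np.random.default_rng(2)
X = rng.random((2000, 3))                  # RGB samples
Y = X @ rng.random((3, 3))                 # placeholder for measured L*a*b* values
labels = np.array([subspace_index(x) for x in X])

# Train one small BP network per subspace instead of one global model.
models = {k: MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                          random_state=0).fit(X[labels == k], Y[labels == k])
          for k in np.unique(labels)}

x_new = np.array([0.2, 0.7, 0.4])
print(models[subspace_index(x_new)].predict(x_new[None, :]))
```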
4

Funabashi, Masatoshi. "Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network." International Journal of Bifurcation and Chaos 25, no. 4 (April 2015): 1550054. http://dx.doi.org/10.1142/s0218127415500546.

Full text
Abstract:
We investigate the possible role of intermittent chaotic dynamics, called chaotic itinerancy, in interaction with unsupervised learning rules that reinforce and weaken the neural connections depending on the dynamics itself. We first performed a hierarchical stability analysis of the Chaotic Neural Network model (CNN) according to the structure of invariant subspaces. Irregular transitions between two attractor ruins with a positive maximum Lyapunov exponent were triggered by the blowout bifurcation of the attractor spaces and were associated with a riddled basin structure. We then modeled two autonomous learning rules, Hebbian learning and the spike-timing-dependent plasticity (STDP) rule, and simulated their effect on the chaotic itinerancy state of the CNN. Hebbian learning increased the residence time on attractor ruins and produced novel attractors in the minimal higher-dimensional subspace. It also augmented neuronal synchrony and established uniform modularity in the chaotic itinerancy. The STDP rule reduced the residence time on attractor ruins and brought a wide range of periodicity to the emerging attractors, possibly including strange attractors. Both learning rules selectively destroyed and preserved specific invariant subspaces, depending on the neuron synchrony of the subspace where the orbits were situated. The computational rationale of such autonomous learning is discussed from a connectionist perspective.
5

Mahomud, V. A., A. S. Hadi, N. K. Wafi, and S. M. R. Taha. "Direction of Arrival Using PCA Neural Networks." Journal of Engineering 10, no. 1 (March 13, 2024): 83–89. http://dx.doi.org/10.31026/j.eng.2004.01.07.

Full text
Abstract:
This paper adapts a neural network to the estimation of the direction of arrival (DOA). It uses an unsupervised adaptive neural network with the APEX algorithm to extract the principal components, which in turn are used by the Capon method to estimate the DOA; from the PCA neural network we take the signal subspace only and use it in the Capon method (i.e., the noise subspace is ignored).
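Numerically, the signal-subspace Capon estimator the abstract describes looks roughly like the sketch below. An eigendecomposition of the sample covariance stands in for the adaptive APEX PCA network, and the array geometry and noise level are our own assumptions:

```python
import numpy as np

M, d = 8, 2                                   # sensors, sources
def steering(theta):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

rng = np.random.default_rng(3)
doas = np.deg2rad([-20.0, 35.0])
A = np.stack([steering(t) for t in doas], axis=1)
S = rng.normal(size=(d, 500)) + 1j * rng.normal(size=(d, 500))
N = 0.1 * (rng.normal(size=(M, 500)) + 1j * rng.normal(size=(M, 500)))
X = A @ S + N

R = X @ X.conj().T / X.shape[1]
w, V = np.linalg.eigh(R)                      # stand-in for the APEX PCA network
Vs, ws = V[:, -d:], w[-d:]                    # keep the signal subspace only
sigma2 = w[:-d].mean()                        # discarded directions -> flat noise floor
R_hat = Vs @ np.diag(ws) @ Vs.conj().T + sigma2 * np.eye(M)
R_inv = np.linalg.inv(R_hat)

grid = np.deg2rad(np.linspace(-90, 90, 721))
P = np.array([1.0 / np.real(steering(t).conj() @ R_inv @ steering(t)) for t in grid])
print(np.rad2deg(grid[np.argmax(P)]))         # the spectrum peaks near -20 and 35 degrees
```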
6

Menghi, Nicholas, Kemal Kacar, and Will Penny. "Multitask learning over shared subspaces." PLOS Computational Biology 17, no. 7 (July 6, 2021): e1009092. http://dx.doi.org/10.1371/journal.pcbi.1009092.

Full text
Abstract:
This paper uses constructs from machine learning to define pairs of learning tasks that either shared or did not share a common subspace. Human subjects then learnt these tasks using a feedback-based approach and we hypothesised that learning would be boosted for shared subspaces. Our findings broadly supported this hypothesis with either better performance on the second task if it shared the same subspace as the first, or positive correlations over task performance for shared subspaces. These empirical findings were compared to the behaviour of a Neural Network model trained using sequential Bayesian learning and human performance was found to be consistent with a minimal capacity variant of this model. Networks with an increased representational capacity, and networks without Bayesian learning, did not show these transfer effects. We propose that the concept of shared subspaces provides a useful framework for the experimental study of human multitask and transfer learning.
7

Cao, Xiang, and A.-long Yu. "Multi-AUV Cooperative Target Search Algorithm in 3-D Underwater Workspace." Journal of Navigation 70, no. 6 (June 30, 2017): 1293–311. http://dx.doi.org/10.1017/s0373463317000376.

Full text
Abstract:
To improve the efficiency of multiple Autonomous Underwater Vehicles (multi-AUV) cooperative target search in a three-dimensional (3D) underwater workspace, an integrated algorithm is proposed that combines a Self-Organising Map (SOM) neural network with a Glasius Bioinspired Neural Network (GBNN). With this integrated algorithm, the 3D underwater workspace is first divided into subspaces according to the abilities of the AUV team members. After that, each subspace is allocated to an AUV as a task by the SOM. Finally, the AUVs move to their assigned subspaces along the shortest paths and carry out their search tasks using the GBNN. By avoiding overlapping search paths and raising the coverage rate, this integrated algorithm reduces the energy consumption of the whole multi-AUV system. Simulation results show that the proposed algorithm can guide a multi-AUV team to achieve a multiple-target search task with higher efficiency and adaptability than a more traditional bioinspired neural network algorithm.
8

Laaksonen, Jorma, and Erkki Oja. "Learning Subspace Classifiers and Error-Corrective Feature Extraction." International Journal of Pattern Recognition and Artificial Intelligence 12, no. 4 (June 1998): 423–36. http://dx.doi.org/10.1142/s0218001498000270.

Full text
Abstract:
Subspace methods are a powerful class of statistical pattern classification algorithms. The subspaces form semiparametric representations of the pattern classes in the form of principal components. In this sense, subspace classification methods are an application of classical optimal data compression techniques. Additionally, the subspace formalism can be given a neural network interpretation. There are learning versions of the subspace classification methods, in which error-driven learning procedures are applied to the subspaces in order to reduce the number of misclassified vectors. An algorithm for iterative selection of the subspace dimensions is presented in this paper. Likewise, a modified formula for calculating the projection lengths in the subspaces is investigated. The principle of adaptive learning in subspace methods can further be applied to feature extraction. In our work, we have studied two adaptive feature extraction schemes. The adaptation process is directed by errors occurring in the classifier. Unlike most traditional classifier models which take the preceding feature extraction stage as given, this scheme allows for reducing the loss of information in the feature extraction stage. The enhanced overall classification performance resulting from the added adaptivity is demonstrated with experiments in which recognition of handwritten digits has been used as an exemplary application.
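The projection-length principle behind these classifiers fits in a short sketch. This is a basic CLAFIC-style classifier; the error-corrective learning and adaptive feature extraction studied in the paper are omitted, and the toy data is ours:

```python
import numpy as np

def fit_class_subspaces(X_by_class, dim):
    """For each class, keep its top-`dim` principal directions as the class subspace."""
    return {c: np.linalg.svd(X.T, full_matrices=False)[0][:, :dim]
            for c, X in X_by_class.items()}

def classify(x, bases):
    """Assign x to the class whose subspace yields the longest projection."""
    return max(bases, key=lambda c: np.linalg.norm(bases[c].T @ x))

rng = np.random.default_rng(4)
X_by_class = {  # two classes with variance concentrated in different coordinates
    0: rng.normal(size=(100, 10)) * np.r_[np.full(3, 3.0), np.full(7, 0.2)],
    1: rng.normal(size=(100, 10)) * np.r_[np.full(7, 0.2), np.full(3, 3.0)],
}
bases = fit_class_subspaces(X_by_class, dim=3)
print(classify(X_by_class[0][0], bases), classify(X_by_class[1][0], bases))  # 0 1
```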
9

Chandar, Sarath, Mitesh M. Khapra, Hugo Larochelle, and Balaraman Ravindran. "Correlational Neural Networks." Neural Computation 28, no. 2 (February 2016): 257–85. http://dx.doi.org/10.1162/neco_a_00801.

Full text
Abstract:
Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)–based approaches and autoencoder (AE)–based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
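The core objective, reconstructing each view from either view's code while maximizing correlation in the common subspace, can be written as a single loss. This sketch uses tied linear encoders/decoders in numpy; the real CorrNet uses nonlinear autoencoders, and the exact weighting of terms follows the paper only in spirit:

```python
import numpy as np

def corrnet_style_loss(X1, X2, W1, W2, lam=1.0):
    """Self- and cross-reconstruction error of the two views, minus a
    correlation bonus between their codes (loss to be minimized)."""
    H1, H2 = X1 @ W1, X2 @ W2                 # codes in the common subspace
    recon = sum(np.sum((X - H @ W.T) ** 2)    # reconstruct each view from each code
                for X, W in ((X1, W1), (X2, W2))
                for H in (H1, H2))
    A, B = H1 - H1.mean(0), H2 - H2.mean(0)
    corr = np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B) + 1e-8)
    return recon - lam * corr

rng = np.random.default_rng(5)
X1, X2 = rng.normal(size=(64, 20)), rng.normal(size=(64, 30))
W1, W2 = 0.1 * rng.normal(size=(20, 5)), 0.1 * rng.normal(size=(30, 5))
print(round(corrnet_style_loss(X1, X2, W1, W2), 2))
```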
10

Kizaric, Ben, and Daniel Pimentel-Alarcón. "Principle Component Trees and Their Persistent Homology." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13220–29. http://dx.doi.org/10.1609/aaai.v38i12.29222.

Full text
Abstract:
Low dimensional models like PCA are often used to simplify complex datasets by learning a single approximating subspace. This paradigm has expanded to union of subspaces models, like those learned by subspace clustering. In this paper, we present Principal Component Trees (PCTs), a graph structure that generalizes these ideas to identify mixtures of components that together describe the subspace structure of high-dimensional datasets. Each node in a PCT corresponds to a principal component of the data, and the edges between nodes indicate the components that must be mixed to produce a subspace that approximates a portion of the data. In order to construct PCTs, we propose two angle-distribution hypothesis tests to detect subspace clusters in the data. To analyze, compare, and select the best PCT model, we define two persistent homology measures that describe their shape. We show our construction yields two key properties of PCTs, namely ancestral orthogonality and non-decreasing singular values. Our main theoretical results show that learning PCTs reduces to PCA under multivariate normality, and that PCTs are efficient parameterizations of intersecting union of subspaces. Finally, we use PCTs to analyze neural network latent space, word embeddings, and reference image datasets.

Theses on the topic "Neural network subspace"

1

Gaya, Jean-Baptiste. "Subspaces of Policies for Deep Reinforcement Learning." Electronic thesis or dissertation, Sorbonne Université, 2024. http://www.theses.fr/2024SORUS075.

Full text
Abstract:
This work explores "Subspaces of Policies for Deep Reinforcement Learning," introducing an innovative approach to address adaptability and generalization challenges in deep reinforcement learning (RL). Situated within the broader context of the AI revolution, this research emphasizes the shift toward scalable and generalizable models in RL, inspired by advancements in deep learning architectures and methodologies. It identifies the limitations of current RL applications, particularly in achieving generalization across varied tasks and domains, and proposes a paradigm shift towards adaptive methods. The research initially tackles zero-shot generalization, assessing deep RL's maturity in generalizing across unseen tasks without additional training. Through investigations into morphological generalization and multi-objective reinforcement learning (MORL), critical limitations in current methods are identified, and novel approaches to improve generalization capabilities are introduced. Notably, work on weight averaging in MORL presents a straightforward method for optimizing multiple objectives, showing promise for future exploration. The core contribution lies in developing a "Subspace of Policies" framework. This novel approach advocates for maintaining a dynamic landscape of solutions within a smaller parametric space by taking advantage of neural network weight averaging. Functional diversity is achieved with minimal computational overhead through weight interpolation between neural network parameters. This methodology is explored through various experiments and settings, including few-shot adaptation and continual reinforcement learning, demonstrating its efficacy and potential for scalability and adaptability in complex RL tasks. The conclusion reflects on the research journey, emphasizing the implications of the "Subspaces of Policies" framework for future AI research. Several future directions are outlined, including enhancing the scalability of subspace methods, exploring their potential in decentralized settings, and addressing challenges in efficiency and interpretability. This foundational contribution to the field of RL paves the way for innovative solutions to long-standing challenges in adaptability and generalization, marking a significant step forward in the development of autonomous agents capable of navigating a wide array of tasks seamlessly.
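The weight-interpolation mechanism at the heart of this framework reduces to taking convex combinations of anchor parameter sets; sweeping the mixing coefficient traces out a line of policies. A minimal illustration follows (the network shapes and two-anchor setup are ours, not the thesis code):

```python
import numpy as np

def interpolate_params(theta_a, theta_b, alpha):
    """Policy parameters alpha * theta_a + (1 - alpha) * theta_b; sweeping
    alpha in [0, 1] walks along a one-dimensional subspace of policies."""
    return {k: alpha * theta_a[k] + (1.0 - alpha) * theta_b[k] for k in theta_a}

rng = np.random.default_rng(6)
shapes = {"W1": (4, 32), "b1": (32,), "W2": (32, 2), "b2": (2,)}
theta_a = {k: rng.normal(size=s) for k, s in shapes.items()}   # anchor policy 1
theta_b = {k: rng.normal(size=s) for k, s in shapes.items()}   # anchor policy 2

# Instantiate (and, in practice, evaluate) a few policies along the subspace.
policies = [interpolate_params(theta_a, theta_b, a) for a in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(len(policies), policies[2]["W1"].shape)
```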
2

Del Real Tamariz, Annabell. "Modelagem computacional de dados e controle inteligente no espaço de estado" [Computational data modeling and intelligent control in state space]. [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260207.

Full text
Advisor: Celso Pascoli Bottura
Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This study presents contributions to state-space computational modeling of multivariable data, with discrete time-invariant as well as time-varying linear systems. The MOESP_AOKI algorithm is proposed for deterministic-stochastic modeling of noisy data. Using multilayer recurrent neural network approaches, we present proposals for solving the discrete-time algebraic Riccati equation as well as the associated linear matrix inequality. An Intelligent Linear Parameter Varying (ILPV) control approach is proposed for multivariable discrete linear time-varying (LTV) systems identified by the MOESP_VAR algorithm, also proposed in this thesis. A gain-scheduling adaptive control scheme based on neural networks is designed to tune the optimal controllers online. In this way, an intelligent LPV control for multivariable data computationally modeled via the MOESP_VAR algorithm is structured, implemented, and tested with good results.
Doctorate: Doctor of Electrical Engineering (Automation)
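As a reference point for the Riccati-equation part of the thesis, a classical fixed-point iteration for the discrete-time algebraic Riccati equation is sketched below; it is a conventional stand-in for illustration, not the recurrent-neural-network or LMI solver the thesis proposes, and the system matrices are our own toy example:

```python
import numpy as np

def dare_fixed_point(A, B, Q, R, iters=500):
    """Iterate P <- A'PA - A'PB (R + B'PB)^{-1} B'PA + Q; for a stabilizable,
    detectable system the limit solves the discrete algebraic Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain
        P = A.T @ P @ A - A.T @ P @ B @ G + Q
    return P

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
print(np.round(dare_fixed_point(A, B, Q, R), 3))
```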

Books on the topic "Neural network subspace"

1

Zhang, Yi, and Zhou Jiliu, eds. Subspace Learning of Neural Networks. Boca Raton: CRC Press, 2011.

Find full text
2

Yi, Zhang, Jian Cheng Lv, and Jiliu Zhou. Subspace Learning of Neural Networks. Taylor & Francis Group, 2018.

Find full text
3

Yi, Zhang, Jian Cheng Lv, and Jiliu Zhou. Subspace Learning of Neural Networks. Taylor & Francis Group, 2017.

Find full text

Book chapters on the topic "Neural network subspace"

1

Han, Min, and Meiling Xu. "Subspace Echo State Network for Multivariate Time Series Prediction." In Neural Information Processing, 681–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34500-5_80.

Full text
2

Hu, Yafeng, Feng Zhu, and Xianda Zhang. "A Novel Approach for License Plate Recognition Using Subspace Projection and Probabilistic Neural Network." In Advances in Neural Networks – ISNN 2005, 216–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_34.

Full text
3

Rosso, Marco M., Angelo Aloisio, Raffaele Cucuzza, Dag P. Pasca, Giansalvo Cirrincione, and Giuseppe C. Marano. "Structural Health Monitoring with Artificial Neural Network and Subspace-Based Damage Indicators." In Lecture Notes in Civil Engineering, 524–37. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20241-4_37.

Full text
4

Smart, Michael H. W. "Rotation invariant IR object recognition using adaptive kernel subspace projections with a neural network." In Biological and Artificial Computation: From Neuroscience to Technology, 1028–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0032562.

Full text
5

Hafiz Ahamed, Md, and Md Ali Hossain. "Spatial-Spectral Kernel Convolutional Neural Network-Based Subspace Detection for the Task of Hyperspectral Image Classification." In Proceedings of International Conference on Information and Communication Technology for Development, 163–70. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7528-8_13.

Full text
6

Laaksonen, Jorma, and Erkki Oja. "Subspace dimension selection and averaged learning subspace method in handwritten digit classification." In Artificial Neural Networks — ICANN 96, 227–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61510-5_41.

Full text
7

Luo, Fa-Long, and Rolf Unbehauen. "Unsupervised learning of the minor subspace." In Artificial Neural Networks — ICANN 96, 489–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61510-5_84.

Full text
8

Di Giacomo, M., and G. Martinelli. "Signal classification by subspace neural networks." In Neural Nets WIRN Vietri-99, 200–205. London: Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0877-1_20.

Full text
9

Pizarro, Pablo, and Miguel Figueroa. "Subspace-Based Face Recognition on an FPGA." In Engineering Applications of Neural Networks, 84–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23957-1_10.

Full text
10

Teixeira, Ana R., Ana Maria Tomé, and E. W. Lang. "Feature Extraction Using Linear and Non-linear Subspace Techniques." In Artificial Neural Networks – ICANN 2009, 115–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04277-5_12.

Full text

Conference papers on the topic "Neural network subspace"

1

Yongqiang Ye and Danwei Wang. "Neural-Network Static Learning Controller in DCT Subspace." In 4th International Conference on Control and Automation: Final Program and Book of Abstracts. IEEE, 2003. http://dx.doi.org/10.1109/icca.2003.1595067.

Full text
2

Saeed, Kashif, Nazih Mechbal, Gerard Coffignal, and Michel Verge. "Subspace-based damage localization using Artificial Neural Network." In Automation (MED 2010). IEEE, 2010. http://dx.doi.org/10.1109/med.2010.5547729.

Full text
3

Heeyoul Choi and Seungjin Choi. "Relative Gradient Learning for Independent Subspace Analysis." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246890.

Full text
4

"Neural Network Based Hammerstein System Identification Using Particle Swarm Subspace Algorithm." In International Conference on Neural Computation. SciTePress - Science and Technology Publications, 2010. http://dx.doi.org/10.5220/0003072401820189.

Full text
5

Chen, Shuyu, Wei Li, Jun Liu, Haoyu Jin, and Xuehui Yin. "Network Intrusion Detection Based on Subspace Clustering and BP Neural Network." In 2021 8th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2021 7th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). IEEE, 2021. http://dx.doi.org/10.1109/cscloud-edgecom52276.2021.00022.

Full text
6

Yongbo Zhang, Yanping Li, and Huakui Wang. "Bilinear Neural Network Tracking Subspace for Blind Multiuser Detection." In 2006 6th World Congress on Intelligent Control and Automation. IEEE, 2006. http://dx.doi.org/10.1109/wcica.2006.1713322.

Full text
7

Sellar, R., and S. Batill. "Concurrent Subspace Optimization using gradient-enhanced neural network approximations." In 6th Symposium on Multidisciplinary Analysis and Optimization. Reston, Virginia: American Institute of Aeronautics and Astronautics, 1996. http://dx.doi.org/10.2514/6.1996-4019.

Full text
8

Guojun Gan, Jianhong Wu, and Zijiang Yang. "PARTCAT: A Subspace Clustering Algorithm for High Dimensional Categorical Data." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.247041.

Full text
9

Jiang-wei, Ge, Zhao Yong-jun, and Wang Feng. "A Neural Network Approach for Subspace Decomposition and Its Dimension Estimation." In 2008 Pacific-Asia Workshop on Computational Intelligence and Industrial Application (PACIIA 2008). IEEE, 2008. http://dx.doi.org/10.1109/paciia.2008.109.

Full text
10

Samarakoon, Lahiru, and Khe Chai Sim. "Subspace LHUC for Fast Adaptation of Deep Neural Network Acoustic Models." In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-1249.

Full text
