Journal articles on the topic "Unsupervised Neural Network"

To see other types of publications on this topic, follow the link: Unsupervised Neural Network.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "Unsupervised Neural Network".

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the online abstract of the work, if these are available in the metadata.

Browse journal articles from many different disciplines and organise your bibliography correctly.

1

Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.

2

Vamaraju, Janaki, and Mrinal K. Sen. "Unsupervised physics-based neural networks for seismic migration." Interpretation 7, no. 3 (August 1, 2019): SE189–SE200. http://dx.doi.org/10.1190/int-2018-0230.1.

Abstract:
We have developed a novel framework for combining physics-based forward models and neural networks to advance seismic processing and inversion algorithms. Migration is an effective tool in seismic data processing and imaging. Over the years, the scope of these algorithms has broadened; today, migration is a central step in the seismic data processing workflow. However, no single migration technique is suitable for all kinds of data and all styles of acquisition. There is always a compromise on the accuracy, cost, and flexibility of these algorithms. On the other hand, machine-learning algorithms and artificial intelligence methods have been found immensely successful in applications in which big data are available. The applicability of these algorithms is being extensively investigated in scientific disciplines such as exploration geophysics with the goal of reducing exploration and development costs. In this context, we have used a special kind of unsupervised recurrent neural network and its variants, Hopfield neural networks and the Boltzmann machine, to solve the problems of Kirchhoff and reverse time migrations. We use the network to migrate seismic data in a least-squares sense using simulated annealing to globally optimize the cost function of the neural network. The weights and biases of the neural network are derived from the physics-based forward models that are used to generate seismic data. The optimal configuration of the neural network after training corresponds to the minimum energy of the network and thus gives the reflectivity solution of the migration problem. Using synthetic examples, we determine that (1) Hopfield neural networks are fast and efficient and (2) they provide reflectivity images with mitigated migration artifacts and improved spatial resolution. Specifically, the presented approach minimizes the artifacts that arise from limited aperture, low subsurface illumination, coarse sampling, and gaps in the data.
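The least-squares migration scheme summarized in entry 2 casts imaging as minimizing the quadratic energy of a Hopfield-type network with simulated annealing. The following minimal NumPy sketch illustrates that general idea only; it is not the authors' implementation, and the forward operator G, the data d, and the cooling schedule are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder forward operator G and recorded data d (stand-ins for a
# physics-based modelling operator and seismic traces).
n_data, n_model = 60, 30
G = rng.normal(size=(n_data, n_model))
m_true = np.zeros(n_model)
m_true[[5, 12, 21]] = [1.0, -0.7, 0.5]
d = G @ m_true + 0.01 * rng.normal(size=n_data)

# Hopfield-style quadratic energy: E(m) = 0.5*m^T W m - b^T m,
# with W = G^T G and b = G^T d, so E(m) = 0.5*||G m - d||^2 + const.
W, b = G.T @ G, G.T @ d
energy = lambda m: 0.5 * m @ W @ m - b @ m

# Simulated annealing over the reflectivity vector m.
m = np.zeros(n_model)
T = 1.0
for it in range(20000):
    cand = m.copy()
    cand[rng.integers(n_model)] += 0.1 * rng.normal()   # perturb one component
    dE = energy(cand) - energy(m)
    if dE < 0 or rng.random() < np.exp(-dE / T):        # Metropolis acceptance
        m = cand
    T *= 0.9997                                         # geometric cooling schedule

print("relative data misfit:", np.linalg.norm(G @ m - d) / np.linalg.norm(d))
```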
3

Lin, Baihan. "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers." Entropy 24, no. 1 (December 28, 2021): 59. http://dx.doi.org/10.3390/e24010059.

Abstract:
Inspired by the adaptation phenomenon of neuronal firing, we propose the regularity normalization (RN) as an unsupervised attention mechanism (UAM) which computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, the regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced, and non-stationary input distributions in image classification, classic control, procedurally generated reinforcement learning, generative modeling, handwriting generation, and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks, tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
4

Jothilakshmi, S., V. Ramalingam, and S. Palanivel. "Unsupervised Speaker Segmentation using Autoassociative Neural Network." International Journal of Computer Applications 1, no. 7 (February 25, 2010): 24–30. http://dx.doi.org/10.5120/167-293.

5

Zhang, Xiaowei, Jianming Lu, Nuo Zhang, and Takashi Yahagi. "Convolutive Nonlinear Separation with Unsupervised Neural Network." IEEJ Transactions on Electronics, Information and Systems 126, no. 8 (2006): 942–49. http://dx.doi.org/10.1541/ieejeiss.126.942.

6

Intrator, Nathan. "Feature Extraction Using an Unsupervised Neural Network." Neural Computation 4, no. 1 (January 1992): 98–107. http://dx.doi.org/10.1162/neco.1992.4.1.98.

Abstract:
A novel unsupervised neural network for dimensionality reduction that seeks directions emphasizing multimodality is presented, and its connection to exploratory projection pursuit methods is discussed. This leads to a new statistical insight into the synaptic modification equations governing learning in Bienenstock, Cooper, and Munro (BCM) neurons (1982). The importance of a dimensionality reduction principle based solely on distinguishing features is demonstrated using a phoneme recognition experiment. The extracted features are compared with features extracted using a backpropagation network.
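Entry 6 builds on the BCM synaptic modification rule. As a reference point, a textbook-style sketch of that rule for a single linear neuron (with random stand-in inputs, not the projection-pursuit network from the paper) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10000, 20))        # stand-in input patterns
w = rng.normal(scale=0.1, size=20)      # synaptic weights of one BCM neuron
theta = 1.0                             # sliding modification threshold
eta, tau = 1e-4, 100.0                  # learning rate, threshold time constant

for x in X:
    y = w @ x                           # postsynaptic activity (linear unit)
    w += eta * y * (y - theta) * x      # BCM update: LTP when y > theta, LTD when y < theta
    theta += (y ** 2 - theta) / tau     # threshold tracks a running average of y^2

print("final weight norm:", np.linalg.norm(w))
```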
7

Pedrycz, W., and J. Waletzky. "Neural-network front ends in unsupervised learning." IEEE Transactions on Neural Networks 8, no. 2 (March 1997): 390–401. http://dx.doi.org/10.1109/72.557690.

8

Park, Dong-Chul. "Centroid neural network for unsupervised competitive learning." IEEE Transactions on Neural Networks 11, no. 2 (March 2000): 520–28. http://dx.doi.org/10.1109/72.839021.

9

Ma, Chao, Yun Gu, Chen Gong, Jie Yang, and Deying Feng. "Unsupervised Video Hashing via Deep Neural Network." Neural Processing Letters 47, no. 3 (March 17, 2018): 877–90. http://dx.doi.org/10.1007/s11063-018-9812-x.

10

Gunhan, Atilla E., László P. Csernai, and Jørgen Randrup. "Unsupervised Competitive Learning in Neural Networks." International Journal of Neural Systems 01, no. 02 (January 1989): 177–86. http://dx.doi.org/10.1142/s0129065789000086.

Abstract:
We study an idealized neural network that may approximate certain neurophysiological features of natural neural systems. The network contains mutual lateral inhibition and is subjected to unsupervised learning by means of a Hebb-type learning principle. Its learning ability is analysed as a function of the strength of the lateral inhibition and of the training set.
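A minimal sketch of the kind of Hebb-type learning under mutual lateral inhibition studied in entry 10 follows. This is a generic competitive-Hebbian toy, not the authors' model; the specific inhibition rule (each unit suppressed by the mean activity of the others) and the weight normalization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, eta = 16, 4, 0.05
W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

for _ in range(5000):
    x = rng.random(n_in)
    a = W @ x                                         # feedforward activation
    inhibition = (a.sum() - a) / (n_out - 1)          # mean activity of the other units
    y = np.maximum(a - inhibition, 0.0)               # mutual lateral inhibition
    W += eta * np.outer(y, x)                         # Hebbian update for the surviving units
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # keep weights bounded

print(np.round(W @ W.T, 2))   # overlaps between the learned weight vectors
```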
11

Jo, Sumin, Wookyung Sun, Bokyung Kim, Sunhee Kim, Junhee Park, and Hyungsoon Shin. "Memristor Neural Network Training with Clock Synchronous Neuromorphic System." Micromachines 10, no. 6 (June 8, 2019): 384. http://dx.doi.org/10.3390/mi10060384.

Abstract:
Memristor devices are considered to have the potential to implement unsupervised learning, especially spike-timing-dependent plasticity (STDP), in the field of neuromorphic hardware research. In this study, a neuromorphic hardware system for multilayer unsupervised learning was designed, and unsupervised learning was performed with a memristor neural network. We showed that a memristor neural network with nonlinear device characteristics can be trained by unsupervised learning using only the correlation between inputs and outputs. Moreover, a method to train nonlinear memristor devices in a supervised manner, named guide training, was devised. Memristor devices have a nonlinear characteristic, which makes implementing machine learning algorithms, such as backpropagation, difficult. The guide-training algorithm devised in this paper updates the synaptic weights by using only the correlations between inputs and outputs, and therefore neither complex mathematical formulas nor computations are required during training. Thus, it is well suited to training a nonlinear memristor neural network. All training and inference simulations were performed using the designed neuromorphic hardware system. With this system and the memristor neural network, image classification was performed successfully using both the Hebbian unsupervised training and the guide supervised training methods.
12

Bernert, Marie, and Blaise Yvert. "An Attention-Based Spiking Neural Network for Unsupervised Spike-Sorting." International Journal of Neural Systems 29, no. 08 (September 25, 2019): 1850059. http://dx.doi.org/10.1142/s0129065718500594.

Abstract:
Bio-inspired computing using artificial spiking neural networks promises performance outperforming currently available computational approaches. Yet, the number of applications of such networks remains limited due to the absence of generic training procedures for complex pattern recognition, which require the design of dedicated architectures for each situation. We developed a spike-timing-dependent plasticity (STDP) spiking neural network (SSN) to address spike-sorting, a central pattern recognition problem in neuroscience. This network is designed to process an extracellular neural signal in an online and unsupervised fashion. The signal stream is continuously fed to the network and processed through several layers to output spike trains matching the ground truth after a short learning period requiring only a small amount of data. The network features an attention mechanism to handle the scarcity of action potential occurrences in the signal, and a threshold adaptation mechanism to handle patterns of different sizes. This method outperforms two existing spike-sorting algorithms at low signal-to-noise ratio (SNR) and can be adapted to process several channels simultaneously in the case of tetrode recordings. Such an attention-based STDP network applied to spike-sorting opens perspectives for embedding neuromorphic processing of neural data in future brain implants.
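Entry 12 relies on spike-timing-dependent plasticity. The core pair-based STDP update with exponential traces, shown here in a generic textbook form rather than the paper's multi-layer attention architecture (spike rates, time constants, and amplitudes are assumptions), fits in a few lines:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pre, dt, T = 50, 1e-3, 1.0
A_plus, A_minus = 0.01, 0.012
tau_plus = tau_minus = 20e-3

w = np.full(n_pre, 0.5)        # synaptic weights onto one postsynaptic neuron
x_pre = np.zeros(n_pre)        # presynaptic eligibility traces
x_post = 0.0                   # postsynaptic eligibility trace

for _ in range(int(T / dt)):
    pre_spikes = rng.random(n_pre) < 0.02          # Poisson-like presynaptic spikes
    post_spike = rng.random() < 0.02               # stand-in postsynaptic spike train
    x_pre = x_pre * np.exp(-dt / tau_plus) + pre_spikes
    x_post = x_post * np.exp(-dt / tau_minus) + post_spike
    # potentiate pre-before-post pairings, depress post-before-pre pairings
    w += A_plus * x_pre * post_spike - A_minus * x_post * pre_spikes
    w = np.clip(w, 0.0, 1.0)

print("mean weight after STDP:", w.mean())
```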
13

Gu, Ming. "The Algorithm of Quadratic Junction Neural Network." Applied Mechanics and Materials 462-463 (November 2013): 438–42. http://dx.doi.org/10.4028/www.scientific.net/amm.462-463.438.

Abstract:
A neural network with quadratic junctions is described. Its structure, properties, and unsupervised learning rules are discussed. An ART-based hierarchical clustering algorithm using this kind of neural network is suggested. The algorithm can determine the number of clusters and cluster the data. A 2-D artificial data set is used to illustrate and compare the effectiveness of the proposed algorithm and the K-means algorithm.
14

Li, Lin, Shengsheng Yu, Luo Zhong, and Xiaozhen Li. "Multilingual Text Detection with Nonlinear Neural Network." Mathematical Problems in Engineering 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/431608.

Abstract:
Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features and combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of the text features. We also develop a novel nonlinear network, based on the traditional Convolutional Neural Network, that is able to detect multilingual text regions in images. The proposed method is evaluated on standard benchmarks and a multilingual dataset and demonstrates improvement over previous work.
15

Zhuang, Chengxu, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L. K. Yamins. "Unsupervised neural network models of the ventral visual stream." Proceedings of the National Academy of Sciences 118, no. 3 (January 11, 2021): e2014196118. http://dx.doi.org/10.1073/pnas.2014196118.

Abstract:
Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today’s best supervised methods and that the mapping of these neural network models’ hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.
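The deep contrastive embedding methods discussed in entry 15 are commonly trained with an InfoNCE (NT-Xent) style objective over two augmented views of each image. A compact PyTorch sketch of such a loss follows; the temperature value and the surrounding encoder and training loop are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, d) embeddings of two augmented views of the same N images."""
    n = z1.shape[0]
    z = torch.cat([F.normalize(z1, dim=1), F.normalize(z2, dim=1)], dim=0)     # (2N, d)
    sim = z @ z.t() / tau                                                       # cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))    # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])              # positive = other view
    return F.cross_entropy(sim, targets)

# Toy usage with random embeddings; in practice z1 and z2 come from an encoder.
print(float(nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))))
```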
16

Lu, Thomas T. "Self-organizing optical neural network for unsupervised learning." Optical Engineering 29, no. 9 (1990): 1107. http://dx.doi.org/10.1117/12.55702.

17

Liao, Tao, Lei Xue, Weisheng Hu, and Lilin Yi. "Unsupervised Learning for Neural Network-Based Blind Equalization." IEEE Photonics Technology Letters 32, no. 10 (May 15, 2020): 569–72. http://dx.doi.org/10.1109/lpt.2020.2985307.

18

Furao, Shen, and Osamu Hasegawa. "A Growing Neural Network for Online Unsupervised Learning." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 2 (March 20, 2004): 121–29. http://dx.doi.org/10.20965/jaciii.2004.p0121.

Abstract:
A new online learning method is proposed for unsupervised classification and topology representation. The combination of a similarity threshold and a local accumulated error makes the algorithm suitable for nonstationary data distributions. A novel online criterion for the removal of nodes is proposed to classify the data set well and eliminate noise. A utility parameter, the error radius, is used to judge whether an insertion is successful and to control the growth in the number of nodes. As the experimental results show, the system can represent the topological structure of unsupervised online data, report a reasonable number of clusters, and give typical prototype patterns for every cluster without prior conditions such as a suitable number of nodes or a good initial codebook.
19

Hung, Shih-Lin, and C. M. Lai. "Unsupervised fuzzy neural network structural active pulse controller." Earthquake Engineering & Structural Dynamics 30, no. 4 (2001): 465–84. http://dx.doi.org/10.1002/eqe.16.

20

Zhang, Lei, Feng Qian, Jie Chen, and Shu Zhao. "An Unsupervised Rapid Network Alignment Framework via Network Coarsening." Mathematics 11, no. 3 (January 21, 2023): 573. http://dx.doi.org/10.3390/math11030573.

Abstract:
Network alignment aims to identify the correspondence of nodes between two or more networks. It is the cornerstone of many network mining tasks, such as cross-platform recommendation and cross-network data aggregation. Recently, with the development of network representation learning techniques, researchers have proposed many embedding-based network alignment methods, which perform better than traditional methods. However, several issues and challenges remain for network alignment tasks, such as the lack of labeled data, mapping across network embedding spaces, and computational efficiency. Based on the graph neural network (GNN), we propose the URNA (unsupervised rapid network alignment) framework to achieve an effective balance between accuracy and efficiency. There are two phases: model training and network alignment. We first compress the original networks into small networks and then exploit these coarse networks to accelerate the training of the GNN. We also use parameter sharing to guarantee the consistency of embedding spaces and an unsupervised loss function to update the parameters. In the network alignment phase, we first use a single forward pass to learn node embeddings of the original networks, and then we use multi-order embeddings from the outputs of all convolutional layers to calculate the similarity of nodes between the two networks via the vector inner product for alignment. Experimental results on real-world datasets show that the proposed method can significantly reduce running time and memory requirements while guaranteeing alignment performance.
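The alignment phase described in entry 20 reduces to scoring cross-network node pairs by the inner product of their embeddings and then matching. A minimal NumPy sketch, with random matrices standing in for the multi-order GNN embeddings, is:

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2, d = 100, 120, 64

# Stand-ins for multi-order node embeddings (concatenated GNN layer outputs).
E1 = rng.normal(size=(n1, d))
E2 = rng.normal(size=(n2, d))

# Normalize, then score every cross-network node pair by inner product.
E1 /= np.linalg.norm(E1, axis=1, keepdims=True)
E2 /= np.linalg.norm(E2, axis=1, keepdims=True)
S = E1 @ E2.T                         # (n1, n2) alignment score matrix

greedy_match = S.argmax(axis=1)       # best candidate in network 2 for each node in network 1
top5 = np.argsort(-S, axis=1)[:, :5]  # or keep a top-k candidate list for evaluation
print(greedy_match[:10])
```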
21

Aragon-Calvo, M. A., and J. C. Carvajal. "Self-supervised learning with physics-aware neural networks – I. Galaxy model fitting." Monthly Notices of the Royal Astronomical Society 498, no. 3 (September 7, 2020): 3713–19. http://dx.doi.org/10.1093/mnras/staa2228.

Abstract:
Estimating the parameters of a model describing a set of observations using a neural network is, in general, solved in a supervised way. In cases when we do not have access to the model's true parameters, this approach cannot be applied. Standard unsupervised learning techniques, on the other hand, do not produce meaningful or semantic representations that can be associated with the model's parameters. Here we introduce a novel self-supervised hybrid network architecture that combines traditional neural network elements with analytic or numerical models, which represent a physical process to be learned by the system. Self-supervised learning is achieved by generating an internal representation equivalent to the parameters of the physical model. This semantic representation is used to evaluate the model and compare it to the input data during training. The semantic autoencoder architecture described here shares the robustness of neural networks while including an explicit model of the data, learns in an unsupervised way, and estimates, by construction, parameters with direct physical interpretation. As an illustrative application, we perform unsupervised learning for 2D model fitting of exponential light profiles and evaluate the performance of the network as a function of network size and noise.
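The semantic-autoencoder idea in entry 21, an encoder that outputs physically meaningful parameters followed by an analytic model that renders them back into an image for an unsupervised reconstruction loss, can be sketched as follows. The encoder architecture and the circular exponential profile parameterization below are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

IMG = 32
yy, xx = torch.meshgrid(torch.arange(IMG, dtype=torch.float32),
                        torch.arange(IMG, dtype=torch.float32), indexing="ij")
r = torch.sqrt((xx - IMG / 2) ** 2 + (yy - IMG / 2) ** 2)      # radius grid

def render(params):
    """Analytic 'decoder': exponential light profile I(r) = A * exp(-r / Rd)."""
    A, Rd = params[:, 0:1], params[:, 1:2]
    return A.view(-1, 1, 1) * torch.exp(-r / Rd.clamp(min=0.5).view(-1, 1, 1))

encoder = nn.Sequential(                       # maps an image to the parameters (A, Rd)
    nn.Flatten(), nn.Linear(IMG * IMG, 64), nn.ReLU(),
    nn.Linear(64, 2), nn.Softplus())           # keep both parameters positive

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
true = torch.rand(256, 2) * torch.tensor([2.0, 6.0]) + torch.tensor([0.5, 1.0])
images = render(true) + 0.01 * torch.randn(256, IMG, IMG)      # synthetic training images

for _ in range(200):
    params = encoder(images)
    loss = ((render(params) - images) ** 2).mean()   # unsupervised: rendered model vs. input
    opt.zero_grad()
    loss.backward()
    opt.step()

print("predicted (A, Rd) for the first image:", params[0].detach().numpy())
```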
22

Spratling, M. W., and M. H. Johnson. "Preintegration Lateral Inhibition Enhances Unsupervised Learning." Neural Computation 14, no. 9 (September 1, 2002): 2157–79. http://dx.doi.org/10.1162/089976602320264033.

Abstract:
A large and influential class of neural network architectures uses postintegration lateral inhibition as a mechanism for competition. We argue that these algorithms are computationally deficient in that they fail to generate, or learn, appropriate perceptual representations under certain circumstances. An alternative neural network architecture is presented here in which nodes compete for the right to receive inputs rather than for the right to generate outputs. This form of competition, implemented through preintegration lateral inhibition, does provide appropriate coding properties and can be used to learn such representations efficiently. Furthermore, this architecture is consistent with both neuroanatomical and neurophysiological data. We thus argue that preintegration lateral inhibition has computational advantages over conventional neural network architectures while remaining equally biologically plausible.
23

Krotov, Dmitry, and John J. Hopfield. "Unsupervised learning by competing hidden units." Proceedings of the National Academy of Sciences 116, no. 16 (March 29, 2019): 7723–31. http://dx.doi.org/10.1073/pnas.1820458116.

Abstract:
It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that neural network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and which is motivated by Hebb’s idea that change of the synapse strength should be local—i.e., should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can be used to train higher-layer weights in a usual supervised way so that the performance of the full network is comparable to the performance of standard feedforward networks trained end-to-end with a backpropagation algorithm on simple tasks.
24

Warstadt, Alex, Amanpreet Singh, and Samuel R. Bowman. "Neural Network Acceptability Judgments." Transactions of the Association for Computational Linguistics 7 (November 2019): 625–41. http://dx.doi.org/10.1162/tacl_a_00290.

Abstract:
This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical from published linguistics literature. As baselines, we train several recurrent neural network models on acceptability classification, and find that our models outperform unsupervised models by Lau et al. (2016) on CoLA. Error-analysis on specific grammatical phenomena reveals that both Lau et al.’s models and ours learn systematic generalizations like subject-verb-object order. However, all models we test perform far below human level on a wide range of grammatical constructions.
25

Zhang, Bing. "Unsupervised English Intelligent Machine Translation in Wireless Network Environment." Security and Communication Networks 2022 (May 21, 2022): 1–9. http://dx.doi.org/10.1155/2022/8208242.

Abstract:
Researchers have proposed unsupervised English machine translation to address the absence of parallel corpora for English translation. Unsupervised pretraining techniques, denoising autoencoders, back translation, and shared latent representation mechanisms are used to simulate the translation task using just monolingual corpora. This paper uses pseudo-parallel data to construct unsupervised neural machine translation (NMT) and analyzes dissimilar language pairs. It first analyzes the low performance of unsupervised translation on dissimilar language pairs from three aspects: bilingual word embedding quality, shared words, and word order. Artificial shared-word replacement and preordering strategies are then proposed to increase the number of shared words between dissimilar language pairs and to reduce the difference in their syntactic structure, thereby improving translation performance on such pairs. The denoising autoencoder and shared latent representation mechanism in unsupervised English machine translation are required only in the early stage of training, and learning the shared latent representation limits further performance improvements in the different translation directions. Moreover, training the denoising autoencoder by repeatedly altering the training data slows down the convergence of the model, especially for divergent languages. This paper presents an unsupervised NMT model based on pseudo-parallel data to address these issues. It trains two standard supervised neural machine translation models using the pseudo-parallel corpus generated by the unsupervised neural machine translation system, which enhances translation performance and speeds up convergence. Finally, the English intelligent translation model is deployed on a wireless network server, and users can access it through the wireless network.
26

Chakravarty, Aniv, and Jagadish S. Kallimani. "Unsupervised Multi-Document Abstractive Summarization Using Recursive Neural Network with Attention Mechanism." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 3867–72. http://dx.doi.org/10.1166/jctn.2020.8976.

Abstract:
Text summarization is an active field of research with the goal of providing short and meaningful gists of large collections of text documents. Extractive text summarization methods, in which text is extracted from the documents to build summaries, have been extensively studied. Multi-document collections vary widely in format, domain, and topic. With recent advances in technology and the use of neural networks for text generation, research interest in abstractive text summarization has increased significantly. Graph-based methods that handle semantic information have shown significant results. Given a set of English text documents, we use an abstractive method and predicate-argument structures to retrieve the necessary text information and pass it through a neural network for text generation. Recurrent neural networks are a subtype of recursive neural networks that try to predict the next element of a sequence from the current state while taking information from previous states into account. The use of neural networks also allows summaries to be generated for long sentences. This paper implements a semantics-based filtering approach using a similarity matrix while keeping all stop words. The similarity is calculated using semantic concepts and the Jiang–Conrath similarity, and a recurrent neural network with an attention mechanism is used to generate the summary. ROUGE scores are used to measure accuracy, precision, and recall.
27

Angadi, Ulavappa B., and M. Venkatesulu. "FuzzyART Neural Network for Protein Classification." Journal of Bioinformatics and Computational Biology 08, no. 05 (October 2010): 825–41. http://dx.doi.org/10.1142/s0219720010004951.

Abstract:
One of the major research directions in bioinformatics is that of predicting the protein superfamily in large databases and classifying a given set of protein domains into superfamilies. The classification reflects structural, evolutionary, and functional relatedness. These relationships are embodied in hierarchical classifications such as the Structural Classification of Proteins (SCOP), which is manually curated. Such classification is essential for the structural and functional analysis of proteins. Yet, a large number of proteins remain unclassified. We have proposed an unsupervised machine-learning FuzzyART neural network algorithm to classify a given set of proteins into SCOP superfamilies. The proposed method is fast-learning and uses an atypical non-linear pattern recognition technique. In this approach, we have constructed a similarity matrix from p-values of an all-against-all BLAST search, trained the network with the FuzzyART unsupervised learning algorithm using the similarity matrix as input vectors, and finally the trained network offers SCOP superfamily-level classification. In this experiment, we have evaluated the performance of our method against existing techniques on six different datasets. We have shown that the trained network is able to classify a given similarity matrix of a set of sequences into SCOP superfamilies with high classification accuracy.
28

Guo, Wenzhe, Hasan Erdem Yantır, Mohammed E. Fouda, Ahmed M. Eltawil, and Khaled Nabil Salama. "Towards Efficient Neuromorphic Hardware: Unsupervised Adaptive Neuron Pruning." Electronics 9, no. 7 (June 27, 2020): 1059. http://dx.doi.org/10.3390/electronics9071059.

Abstract:
To solve real-time challenges, neuromorphic systems generally require deep and complex network structures. Thus, it is crucial to search for effective solutions that can reduce network complexity, improve energy efficiency, and maintain high accuracy. To this end, we propose unsupervised pruning strategies that prune neurons during training in spiking neural networks (SNNs) by utilizing network dynamics. Neuron importance is determined by the observation that neurons that fire more spikes contribute more to network performance. Based on this criterion, we demonstrate that pruning with an adaptive spike count threshold provides a simple and effective approach that can reduce network size significantly and maintain high classification accuracy. The online adaptive pruning shows potential for developing energy-efficient training techniques due to fewer memory accesses and less weight-update computation. Furthermore, a parallel digital implementation scheme is proposed to implement spiking neural networks (SNNs) on a field-programmable gate array (FPGA). Notably, our proposed pruning strategies preserve the dense format of weight matrices, so the implementation architecture remains the same after network compression. The adaptive pruning strategy enables a 2.3× reduction in memory size and a 2.8× improvement in energy efficiency when 400 neurons are pruned from an 800-neuron network, while the loss of classification accuracy is 1.69%. The best choice of pruning percentage depends on the trade-off among accuracy, memory, and energy. Therefore, this work offers a promising solution for effective network compression and energy-efficient hardware implementation of neuromorphic systems in real-time applications.
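The adaptive spike-count pruning criterion of entry 28 amounts to masking hidden neurons whose firing counts fall below a threshold tied to the layer's average activity, while keeping the weight matrices dense. A minimal NumPy sketch follows; the specific threshold rule and the factor alpha are a generic interpretation, not the exact rule from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_hidden, n_out = 800, 10
W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
spike_counts = rng.poisson(lam=5.0, size=n_hidden)   # spikes fired by each hidden neuron during training

alpha = 1.0                                          # adaptive scaling factor (assumption)
threshold = alpha * spike_counts.mean()
keep = spike_counts >= threshold                     # neurons that fire more are kept

# Dense-format pruning: zero the columns of inactive neurons instead of
# changing the matrix shape, so the hardware mapping stays the same.
W_pruned = W_out * keep[None, :]

print(f"pruned {np.sum(~keep)} of {n_hidden} neurons "
      f"({100 * np.sum(~keep) / n_hidden:.1f}%)")
```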
29

Xu, Jianqiao, Zhaolu Zuo, Danchao Wu, Bing Li, Xiaoni Li, and Deyi Kong. "Bearing Defect Detection with Unsupervised Neural Networks." Shock and Vibration 2021 (August 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/9544809.

Abstract:
Bearings always suffer from surface defects, such as scratches, black spots, and pits. These surface defects have a great effect on the quality and service life of bearings. Therefore, defect detection has always been a focus of bearing quality control. Deep learning has been successfully applied to object detection due to its excellent performance. However, it is difficult to realize automatic detection of bearing surface defects with data-driven deep learning because few samples of bearing defects are available from the actual production line. A sample preprocessing algorithm based on the normalized symmetry of the bearing is adopted to greatly increase the number of samples. Two different convolutional neural networks, a supervised network and an unsupervised network, are tested separately for bearing defect detection. The first experiment adopts the supervised network, with ResNet neural networks selected for this experiment. The result shows that the AUC of the model is 0.8567, which is too low for actual use. Moreover, the positive and negative samples have to be labelled manually. To improve the AUC of the model and the flexibility of sample labelling, a new unsupervised neural network based on autoencoder networks is proposed. Gradients of the unlabeled data are used as labels, and autoencoder networks are created with U-net to predict the output. In the second experiment, the positive samples of the supervised experiment are used as the training set. The experiment with the unsupervised neural networks shows that the AUC of the model is 0.9721. This AUC is higher than in the first experiment, but the positive samples must be selected. To overcome this shortcoming, the dataset of the third experiment is the same as in the supervised experiment, where all the positive and negative samples are mixed together, which means that there is no need to label the samples. This experiment shows that the AUC of the model is 0.9623. Although this AUC is slightly lower than that of the second experiment, it is high enough for actual use. The experimental results demonstrate the feasibility and superiority of the proposed unsupervised networks.
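In the unsupervised setting of entry 29, defect detection reduces to scoring each image by its reconstruction error under a trained autoencoder and reading off the AUC. A schematic sketch is shown below; the reconstruct argument is a placeholder for a trained model (e.g. a U-net), and the toy data and trivial stand-in reconstructor are illustrative only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_scores(images, reconstruct):
    """Per-image anomaly score = mean squared reconstruction error.

    `reconstruct` is assumed to be a trained autoencoder (e.g. a U-net)
    mapping a batch of images to their reconstructions.
    """
    recon = reconstruct(images)
    return ((images - recon) ** 2).reshape(len(images), -1).mean(axis=1)

# Toy demonstration with synthetic "normal" and "defective" images.
rng = np.random.default_rng(6)
normal = rng.normal(0.0, 0.05, size=(200, 64, 64))
defect = rng.normal(0.0, 0.05, size=(40, 64, 64))
defect[:, 20:24, 20:24] += 0.5                         # synthetic scratch
images = np.concatenate([normal, defect])
labels = np.concatenate([np.zeros(200), np.ones(40)])  # 1 = defective

# A constant-output lambda stands in for the trained autoencoder here.
scores = anomaly_scores(images, reconstruct=lambda x: np.full_like(x, x.mean()))
print("AUC:", roc_auc_score(labels, scores))
```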
30

Li, Bo, Zhipeng Yang, Zhuoran Jia, and Hao Ma. "An unsupervised learning neural network for planning UAV full-area reconnaissance path." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 1 (February 2021): 77–84. http://dx.doi.org/10.1051/jnwpu/20213910077.

Abstract:
To plan a UAV's full-area reconnaissance path under uncertain information conditions, an unsupervised learning neural network based on the genetic algorithm is proposed. Firstly, the environment model, the UAV model, and the evaluation indexes are presented, and the neural network model for planning the UAV's full-area reconnaissance path is established. Because it is difficult to obtain training samples for planning the UAV's full-area reconnaissance path, the genetic algorithm is used to optimize the unsupervised learning neural network parameters. Compared with traditional methods, the evaluation indexes constructed in this paper do not require UAV maneuver rules to be specified. The offline learning method proposed in the paper has excellent transfer performance. The simulation results show that a UAV based on the unsupervised learning neural network can plan effective full-area reconnaissance paths in unknown environments and complete full-area reconnaissance missions.
31

Lin, Yi-Nan, Tsang-Yen Hsieh, Cheng-Ying Yang, Victor RL Shen, Tony Tong-Ying Juang, and Wen-Hao Chen. "Deep Petri nets of unsupervised and supervised learning." Measurement and Control 53, no. 7-8 (June 9, 2020): 1267–77. http://dx.doi.org/10.1177/0020294020923375.

Abstract:
Artificial intelligence is one of the hottest research topics in computer science. In general, when deep learning is required, the most intuitive implementation method is to use a neural network. But neural networks have two shortcomings. First, they are not easy to understand: when an implementation is needed, it often requires a lot of related research effort. Second, their structure is complex: in building a complete learning structure with fully defined connections between nodes, the overall structure becomes complicated, and it is hard for developers to track the parameter changes inside. Therefore, the goal of this article is to provide a more streamlined method for performing deep learning. A modified high-level fuzzy Petri net, called a deep Petri net, is used to perform deep learning, aiming at a simple, easy-to-track structure and at faster speed than a deep neural network. The experimental results show that the deep Petri net performs better than the deep neural network.
32

Orre, Roland, Andrew Bate, G. Niklas Norén, Erik Swahn, Stefan Arnborg, and I. Ralph Edwards. "A Bayesian Recurrent Neural Network for Unsupervised Pattern Recognition in Large Incomplete Data Sets." International Journal of Neural Systems 15, no. 03 (June 2005): 207–22. http://dx.doi.org/10.1142/s0129065705000219.

Abstract:
A recurrent neural network, modified to handle highly incomplete training data, is described. Unsupervised pattern recognition is demonstrated on the WHO database of adverse drug reactions. A comparison is made to a well-established method, AutoClass, and the performance of both methods is investigated on simulated data. The neural network method performs comparably to AutoClass on simulated data, and better than AutoClass on real-world data. With its better scaling properties, the neural network is a promising tool for unsupervised pattern recognition in huge databases of incomplete observations.
33

Sharma, Rachita, and Sanjay Kumar Dubey. "Analysis of SOM & SOFM Techniques Used in Satellite Imagery." International Journal of Computers & Technology 4, no. 2 (June 21, 2018): 563–65. http://dx.doi.org/10.24297/ijct.v4i2c1.4181.

Abstract:
This paper introduces supervised and unsupervised techniques, with a comparison of the SOFM (Self-Organizing Feature Map) used for satellite imagery. It explains how spatial and temporal change detection is used for forecasting from satellite imagery. Forecasting is based on time series of images using an artificial neural network. Recently, neural networks have gained a lot of interest in time series prediction due to their ability to learn nonlinear dependencies effectively from large volumes of possibly noisy data with a learning algorithm. Unsupervised neural networks reveal useful information from the temporal sequence, and they have shown power in cluster analysis and dimensionality reduction. In unsupervised learning, no pre-classification or pre-labeling of the input data is needed. The SOFM is one of the unsupervised neural networks used for time series prediction. In time series prediction, the goal is to construct a model that can predict the future of the measured process of interest. Various approaches to time series prediction have been used over the years; it is a research area with applications in diverse fields such as weather forecasting, speech recognition, and remote sensing. Advances in remote sensing technology and the availability of high-resolution images in recent years have motivated many researchers to study patterns in the images for the purpose of trend analysis.
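For reference, the core of the SOFM training loop discussed in entry 33, namely finding the best-matching unit and pulling it and its grid neighbours toward the input, takes only a few lines. This is a generic SOM sketch with random stand-in features, not the satellite-imagery pipeline of the paper; the grid size and decay schedules are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
grid_h, grid_w, d = 10, 10, 3
W = rng.random((grid_h, grid_w, d))                 # codebook vectors on a 2-D grid
ii, jj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")

X = rng.random((5000, d))                           # stand-in feature vectors (e.g. pixel spectra)
for t, x in enumerate(X):
    lr = 0.5 * (1 - t / len(X))                     # decaying learning rate
    sigma = 3.0 * (1 - t / len(X)) + 0.5            # decaying neighbourhood radius
    dist = np.linalg.norm(W - x, axis=2)
    bi, bj = np.unravel_index(dist.argmin(), dist.shape)        # best-matching unit
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    W += lr * h[..., None] * (x - W)                # neighbourhood-weighted update

print("trained codebook shape:", W.shape)
```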
34

Al-Khasawneh, Ahmad. "Diagnosis of Breast Cancer Using Intelligent Information Systems Techniques." International Journal of E-Health and Medical Communications 7, no. 1 (January 2016): 65–75. http://dx.doi.org/10.4018/ijehmc.2016010104.

Abstract:
Breast cancer is the second leading cause of cancer deaths in women worldwide. Early diagnosis of this illness can increase the chances of long-term survival of cancer patients. To help with this, computerized breast cancer diagnosis systems are being developed. Machine learning algorithms and data mining techniques play a central role in the diagnosis. This paper describes neural-network-based approaches to breast cancer diagnosis. The aim of this research is to investigate and compare the performance of supervised and unsupervised neural networks in diagnosing breast cancer. A multilayer perceptron has been implemented as a supervised neural network and a self-organizing map as an unsupervised one. Both models were simulated using a variety of parameters and tested using several combinations of those parameters in independent experiments. It was concluded that the multilayer perceptron neural network outperforms Kohonen's self-organizing maps in diagnosing breast cancer even with small data sets.
35

Yu, Rui, Rui Xiang, Zhi Wu Ke, Shi Wei Yao, and Xin Wan. "Fault Diagnosis of Condenser in Ship Steam Power System Based on Unsupervised Learning Neural Network." Applied Mechanics and Materials 271-272 (December 2012): 1568–72. http://dx.doi.org/10.4028/www.scientific.net/amm.271-272.1568.

Abstract:
A diagnostic method based on unsupervised learning neural networks is proposed for condenser faults in the ship steam power system. First, we analyzed the causes of condenser faults according to the operating features of the condenser in the steam power system. Combined with expert knowledge, we derived the training sample model for condenser fault diagnosis. Then we adopted two types of unsupervised learning neural networks to diagnose the faults of the condenser. Tests showed the diagnostic method to be rapid and accurate. Finally, we analyzed and compared the performance and the optimization approaches of the unsupervised learning neural networks for fault diagnosis. The diagnostic method provides guidance for the safe operation of the ship steam power system.
36

Kim, Gyuwon, and Seungchul Lee. "Generative Adversarial Neural Network for Unsupervised Bearing Fault Detection." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 3 (August 1, 2021): 3643–48. http://dx.doi.org/10.3397/in-2021-2479.

Abstract:
Detecting bearing faults in advance is critical for mechanical and electrical systems to prevent economic loss and safety hazards. As part of the recent interest in artificial intelligence, deep learning (DL)-based principles have gained much attention in intelligent fault diagnostics and have mainly been developed in a supervised manner. While these works have shown promising results, several technical setbacks are inherent in a supervised learning setting. Data imbalance is a critical problem as faulty data is scarce in many cases, data labeling is tedious, and unseen cases of faults cannot be detected in a supervised framework. Herein, a generative adversarial network (GAN) is proposed to achieve unsupervised bearing fault diagnostics by utilizing only the normal data. The proposed method first adopts the short-time Fourier transform (STFT) to convert the 1-D vibration signals into 2-D time-frequency representations to use as the input to our DL framework. Subsequently, a GAN-based latent mapping is constructed using only the normal data, and faulty signals are detected using an anomaly metric composed of a discriminator error and an image reconstruction error. The performance of our method is verified using a classic rotating machinery dataset (the Case Western Reserve bearing dataset), and the experimental results demonstrate that our method can not only detect the faults but can also cluster the faults in the latent space with high accuracy.
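The detection pipeline of entry 36, an STFT of the vibration signal followed by an anomaly score that combines reconstruction error and discriminator error from a GAN trained only on normal data, can be expressed schematically as below. The generator and discriminator arguments are placeholders for trained models, and the weighting lam is an assumption; this is a sketch of the general recipe, not the paper's implementation.

```python
import numpy as np
from scipy.signal import stft

def vibration_to_tf_image(signal, fs):
    """1-D vibration signal -> 2-D time-frequency magnitude image via STFT."""
    _, _, Z = stft(signal, fs=fs, nperseg=256)
    return np.abs(Z)

def anomaly_score(tf_image, generator, discriminator, lam=0.9):
    """Score = weighted sum of reconstruction error and discriminator error.

    `generator` is assumed to map a time-frequency image to its closest
    reconstruction on the learned 'normal' manifold; `discriminator`
    returns features (or a score) used for the second error term.
    """
    recon = generator(tf_image)
    rec_err = np.mean(np.abs(tf_image - recon))
    disc_err = np.mean(np.abs(discriminator(tf_image) - discriminator(recon)))
    return lam * rec_err + (1.0 - lam) * disc_err

# Usage with trained models would look roughly like:
#   score = anomaly_score(vibration_to_tf_image(x, fs=12000), G, D)
#   is_faulty = score > threshold_chosen_on_validation_data
```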
37

Li, Changsheng, Handong Ma, Ye Yuan, Guoren Wang, and Dong Xu. "Structure Guided Deep Neural Network for Unsupervised Active Learning." IEEE Transactions on Image Processing 31 (2022): 2767–81. http://dx.doi.org/10.1109/tip.2022.3161076.

38

Rahman, S. M. Monzurur, Xinghuo Yu, and F. A. Siddiky. "An unsupervised neural network approach to predictive data mining." International Journal of Data Mining, Modelling and Management 3, no. 1 (2011): 1. http://dx.doi.org/10.1504/ijdmmm.2011.038809.

39

Zhao, Xiaowei, and Ping Li. "Bilingual lexical interactions in an unsupervised neural network model." International Journal of Bilingual Education and Bilingualism 13, no. 5 (September 2010): 505–24. http://dx.doi.org/10.1080/13670050.2010.488284.

40

Burke, Laura I. "An Unsupervised Neural Network Approach to Tool Wear Identification." IIE Transactions 25, no. 1 (January 1993): 16–25. http://dx.doi.org/10.1080/07408179308964262.

41

Dotan, Y., and N. Intrator. "Multimodality exploration by an unsupervised projection pursuit neural network." IEEE Transactions on Neural Networks 9, no. 3 (May 1998): 464–72. http://dx.doi.org/10.1109/72.668888.

42

Jammu, V. B., K. Danai, and S. Malkin. "Unsupervised Neural Network for Tool Breakage Detection in Turning." CIRP Annals - Manufacturing Technology 42, no. 1 (January 1993): 67–70. http://dx.doi.org/10.1016/s0007-8506(07)62393-2.

43

Zhang, Yongshan, Jia Wu, Zhihua Cai, Bo Du, and Philip S. Yu. "An unsupervised parameter learning model for RVFL neural network." Neural Networks 112 (April 2019): 85–97. http://dx.doi.org/10.1016/j.neunet.2019.01.007.

44

Fu, Qiang, and Hongbin Dong. "An ensemble unsupervised spiking neural network for objective recognition." Neurocomputing 419 (January 2021): 47–58. http://dx.doi.org/10.1016/j.neucom.2020.07.109.

45

Benaim, M. "A Stochastic Model of Neural Network for Unsupervised Learning." Europhysics Letters (EPL) 19, no. 3 (June 1, 1992): 241–46. http://dx.doi.org/10.1209/0295-5075/19/3/015.

46

Nissani, D. N. "An unsupervised hyperspheric multi-layer feedforward neural network model." Biological Cybernetics 65, no. 6 (October 1991): 441–50. http://dx.doi.org/10.1007/bf00204657.

47

Liu, Jia, Maoguo Gong, A. K. Qin, and Kay Chen Tan. "Bipartite Differential Neural Network for Unsupervised Image Change Detection." IEEE Transactions on Neural Networks and Learning Systems 31, no. 3 (March 2020): 876–90. http://dx.doi.org/10.1109/tnnls.2019.2910571.

48

Yang, Zihan, Mingyi Gao, Junfeng Zhang, Yuanyuan Ma, Wei Chen, Yonghu Yan, and Gangxiang Shen. "Unsupervised Neural Network for Modulation Format Discrimination and Identification." IEEE Access 7 (2019): 70077–87. http://dx.doi.org/10.1109/access.2019.2916806.

49

Intrator, Nathan, Daniel Reisfeld, and Yehezkel Yeshurun. "Face recognition using a hybrid supervised/unsupervised neural network." Pattern Recognition Letters 17, no. 1 (January 1996): 67–76. http://dx.doi.org/10.1016/0167-8655(95)00089-5.

50

Tambouratzis, G. "Image segmentation with the SOLNN unsupervised logic neural network." Neural Computing & Applications 6, no. 2 (June 1997): 91–101. http://dx.doi.org/10.1007/bf01414006.
